[jira] [Commented] (MAPREDUCE-4688) setJarByClass does not work under JBoss AS 7

2012-09-27 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464526#comment-13464526
 ] 

Luke Lu commented on MAPREDUCE-4688:


It'd be much easier and more flexible for webapps to submit jobs via Oozie 
(another J2EE service). Now you have J2EE and SOA :)

 setJarByClass does not work under JBoss AS 7
 

 Key: MAPREDUCE-4688
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4688
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.20.2, 1.0.3
 Environment: Hadoop Cloudera CDH3 Cluster w/Hadoop 0.20.2 on CentOS 
 5/6
 Client on JBoss AS 7.1.1.Final CentOS/Windows
Reporter: Philippe
Priority: Minor
  Labels: patch

 Hello,
 I’m using Hadoop as a client from a J2EE web application. One of the libs 
 within my EAR is a jar containing several Map/Reduce jobs. Using JBoss AS 4 
 in the past, I had no problem running the jobs with the following code:
 {code}
 try {
     final Configuration conf = HBaseConfiguration.create();
     // Load all Hadoop configuration
     conf.addResource("core-site.xml");
     conf.addResource("hdfs-site.xml");
     conf.addResource("mapred-site.xml");
     conf.addResource("hbase-site.xml");
     final Job job = new Job(conf, "My Job");
     job.setJarByClass(MyJobClass.getClass());

     TableMapReduceUtil.initTableMapperJob(...);

     TableMapReduceUtil.initTableReducerJob(...);

     final boolean status = job.waitForCompletion(true);
 } ...
 {code}
 Since then, we have moved to JBoss AS 7 and the method setJarByClass no longer 
 works. Indeed, in 
 *org.apache.hadoop.mapred.JobConf.findContainingJar(Class)* the retrieved URL 
 does not have a *jar* protocol but a JBoss *vfs* protocol, so it always 
 returns null and the jar is not sent to the Map/Reduce cluster.
 With the VFS protocol the resource name may or may not be the actual file 
 system name of the resource: the class file is inside the jar, which may 
 itself be inside an EAR in the case of a non-exploded deployment, so there is 
 no File on the file system corresponding to the resource. That said, I guess 
 similar issues may happen with the jar: protocol.
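 For reference, here is a rough sketch of that check (a paraphrase of the 
 findContainingJar logic as I understand it, not an exact copy of the Hadoop 
 source), which shows why a vfs: URL falls through and null is returned:
 {code}
 // Paraphrased sketch: only jar: URLs are handled, so a JBoss vfs: URL never
 // matches and the loop falls through to return null.
 String classFile = cls.getName().replaceAll("\\.", "/") + ".class";
 for (Enumeration<URL> itr = loader.getResources(classFile); itr.hasMoreElements();) {
     URL url = itr.nextElement();
     if ("jar".equals(url.getProtocol())) {      // a vfs: URL fails this test
         String path = url.getPath();
         if (path.startsWith("file:")) {
             path = path.substring("file:".length());
         }
         return URLDecoder.decode(path, "UTF-8").replaceAll("!.*$", "");
     }
 }
 return null;
 {code}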
 To make the job work with JBoss AS 7, I wrote the following subclass of the 
 Job class. It overrides the setJarByClass mechanism by creating a temporary 
 jar file from the actual jar file read through VFS.
 {code}
 /**
  * Patch of Map/Red Job to handle VFS jar file
  */
 public class VFSJob extends Job
 {

     /** Logger */
     private static final transient Logger logger =
             LoggerFactory.getLogger(VFSJob.class);

     public VFSJob() throws IOException {
         super();
     }

     public VFSJob(Configuration conf) throws IOException {
         super(conf);
     }

     public VFSJob(Configuration conf, String jobName) throws IOException {
         super(conf, jobName);
     }

     private File temporaryJarFile;

     /**
      * Patch of setJarByClass to handle VFS
      */
     @Override
     public void setJarByClass(Class<?> cls) {
         final ClassLoader loader = cls.getClassLoader();
         final String classFile = cls.getName().replaceAll("\\.", "/") + ".class";
         JarInputStream is = null;
         JarOutputStream os = null;
         try {
             final Enumeration<URL> itr = loader.getResources(classFile);
             while (itr.hasMoreElements()) {
                 final URL classUrl = itr.nextElement();
                 // This is the trick
                 if (!"vfs".equals(classUrl.getProtocol())) {
                     continue;
                 }

                 final String jarFile = classUrl.getFile().substring(0,
                         classUrl.getFile().length() - (classFile.length() + 1)); // +1 because of '/'
                 final URL jarUrl = new URL(classUrl.getProtocol(),
                         classUrl.getHost(), classUrl.getPort(), jarFile,
                         new org.jboss.vfs.protocol.VirtualFileURLStreamHandler());

                 temporaryJarFile = File.createTempFile("mapred", ".jar");
                 is = (JarInputStream) jarUrl.openStream();
                 os = new JarOutputStream(new FileOutputStream(temporaryJarFile));
                 final byte[] buffer = new byte[2048];
                 for (JarEntry entry = is.getNextJarEntry(); entry != null;
                         entry = is.getNextJarEntry()) {
                     os.putNextEntry(entry);
                     int bytesRead;
                     while ((bytesRead = is.read(buffer)) != -1) {
                         os.write(buffer, 0, bytesRead);
                     }
                 }
                 this.conf.setJar(temporaryJarFile.getPath());
                 return;
 

[jira] [Commented] (MAPREDUCE-4282) Convert Forrest docs to APT

2012-09-27 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464613#comment-13464613
 ] 

Tom White commented on MAPREDUCE-4282:
--

Thanks for looking at this Eli. I tried to build the site using

{noformat}
mvn clean site; mvn -o site:stage -DstagingDirectory=/tmp/hadoop-site
{noformat}

but I got

{noformat}
Error during page generation: Files 'apt/index.apt.vm' clashes with existing 
'/Users/tom/workspace/hadoop-trunk/hadoop-project/src/site/apt/index.apt'.
{noformat}

The index.apt file contains documentation about distcp, so it should be renamed 
or merged with distcp.apt. When I fixed that problem the build succeeded but 
the rendering of the files looked wrong - it just had the raw markup.

I noticed that you put all the files at the top-level rather than in modules 
(Common, HDFS, MapReduce). It probably makes sense to split them up more (see 
HADOOP-8860), although the links get a bit more complex in that case due to the 
way that Maven uses the POM hierarchy to create the structure.

 Convert Forrest docs to APT
 ---

 Key: MAPREDUCE-4282
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4282
 Project: Hadoop Map/Reduce
  Issue Type: Task
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Reisman
  Labels: newbie
 Attachments: MAPREDUCE-4282-1.patch, MAPREDUCE-4282-2.patch


 MR side of HADOOP-8427. Not all of the old forrest docs in 
 src/documentation/content/xdocs have been converted over to APT yet, let's do 
 that and remove the forrest docs.
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4278) cannot run two local jobs in parallel from the same gateway.

2012-09-27 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464638#comment-13464638
 ] 

Tom White commented on MAPREDUCE-4278:
--

bq. This could be avoided by adding a timestamp component to local job ids?

It looks like getStagingAreaDir() is using a random number to generate a unique 
staging directory, so you could reuse that unique identifier for the job ID. 
Also, the local job directory (localRunner) needs to be made unique too, 
otherwise the job configuration file could clash. 
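As a rough sketch of the idea (names are illustrative, not the actual LocalJobRunner code), the jobtracker identifier of the local job id could carry that same random component, e.g.:

{code}
// Sketch only: reuse the staging directory's random number as the jobtracker
// identifier of the local job id, giving ids like job_local1234567890_0001
// that cannot collide across concurrent submissions from the same gateway.
int randomId = new Random().nextInt(Integer.MAX_VALUE);
JobID jobId = new JobID("local" + randomId, ++jobIdCounter);
{code}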

 cannot run two local jobs in parallel from the same gateway.
 

 Key: MAPREDUCE-4278
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4278
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.20.205.0, 0.23.1, 1.0.2
Reporter: Araceli Henley

 I cannot run two local mode jobs from Pig in parallel from the same gateway, 
 which is a typical use case. If I re-run the tests sequentially, then the tests 
 pass. This seems to be a problem in Hadoop.
 Additionally, the Pig harness expects to be able to run 
 Pig-version-undertest against Pig-version-stable from the same gateway.
 To replicate the error:
 I have two clusters running from the same gateway.
 If I run the Pig regression suite nightly.conf in local mode in parallel - 
 once on each cluster - conflicts in M/R local mode result in failures in the 
 tests. 
 ERROR1:
 org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find
 output/file.out in any of the configured local directories
 at
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:429)
 at
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:160)
 at
 org.apache.hadoop.mapred.MapOutputFile.getOutputFile(MapOutputFile.java:56)
 at org.apache.hadoop.mapred.Task.calculateOutputSize(Task.java:944)
 at org.apache.hadoop.mapred.Task.sendLastUpdate(Task.java:924)
 at org.apache.hadoop.mapred.Task.done(Task.java:875)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:374)
 ---
 ERROR2:
 2012-05-17 20:25:36,762 [main] INFO
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher
 -
 HadoopJobId: job_local_0001
 2012-05-17 20:25:36,778 [Thread-3] INFO  org.apache.hadoop.mapred.Task -
 Using ResourceCalculatorPlugin : org.apache.
 hadoop.util.LinuxResourceCalculatorPlugin@ffa490e
 2012-05-17 20:25:36,837 [Thread-3] WARN
 org.apache.hadoop.mapred.LocalJobRunner - job_local_0001
 java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
 at java.util.ArrayList.RangeCheck(ArrayList.java:547)
 at java.util.ArrayList.get(ArrayList.java:322)
 at
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getLoadFunc(PigInputFormat.java
 :153)
 at
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.createRecordReader(PigInputForm
 at.java:106)
 at
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.init(MapTask.java:489)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:731)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
 at
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
 2012-05-17 20:25:41,291 [main] INFO
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4610) Support deprecated mapreduce.job.counters.limit property in MR2

2012-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464669#comment-13464669
 ] 

Hudson commented on MAPREDUCE-4610:
---

Integrated in Hadoop-Hdfs-0.23-Build #387 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/387/])
svn merge -c 1379022 FIXES: MAPREDUCE-4610. Support deprecated 
mapreduce.job.counters.limit property in MR2. (Revision 1390670)

 Result = UNSTABLE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390670
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/Limits.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ConfigUtil.java


 Support deprecated mapreduce.job.counters.limit property in MR2
 ---

 Key: MAPREDUCE-4610
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4610
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 2.0.0-alpha
Reporter: Tom White
Assignee: Tom White
 Fix For: 2.0.2-alpha

 Attachments: MAPREDUCE-4610.patch


 The property mapreduce.job.counters.limit was introduced in MAPREDUCE-1943, 
 but the mechanism was changed in MAPREDUCE-901 where the property name was 
 changed to mapreduce.job.counters.max without supporting the old name. We 
 should deprecate but honour the old name to make it easier for folks to move 
 from Hadoop 1 to Hadoop 2.
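 A hedged sketch of one way to honour the old name, assuming Hadoop's 
 Configuration deprecation support is used for the mapping:
 {code}
 // Map the old Hadoop 1 key to the new MR2 key so existing job configurations
 // keep working while emitting the usual deprecation warning.
 Configuration.addDeprecation("mapreduce.job.counters.limit",
     new String[] { "mapreduce.job.counters.max" });
 {code}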

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4647) We should only unjar jobjar if there is a lib directory in it.

2012-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464671#comment-13464671
 ] 

Hudson commented on MAPREDUCE-4647:
---

Integrated in Hadoop-Hdfs-0.23-Build #387 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/387/])
MAPREDUCE-4647. We should only unjar jobjar if there is a lib directory in 
it. (Robert Evans via tgraves) (Revision 1390560)

 Result = UNSTABLE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390560
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapred/LocalDistributedCacheManager.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRApps.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/util/TestMRApps.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/YARNRunner.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LocalResource.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LocalResourceType.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/LocalResourcePBImpl.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/LocalResourceRequest.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/LocalizedResource.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/event/LocalizerResourceRequestEvent.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestLocalResource.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestLocalResourcesTrackerImpl.java
* 
/hadoop/common/branches/branch-0.23/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceRetention.java


 We should only unjar jobjar if there is a lib directory in it.
 --

 Key: MAPREDUCE-4647
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4647
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.3
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Fix For: 3.0.0, 0.23.4, 2.0.3-alpha

 Attachments: MR-4647-branch-0.23.txt, MR-4647.txt, MR-4647.txt, 
 MR-4647.txt, MR-4647.txt


 For 

[jira] [Commented] (MAPREDUCE-4408) allow jobs to set a JAR that is in the distributed cached

2012-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464674#comment-13464674
 ] 

Hudson commented on MAPREDUCE-4408:
---

Integrated in Hadoop-Hdfs-0.23-Build #387 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/387/])
svn merge -c 1377149 FIXES: MAPREDUCE-4408. allow jobs to set a JAR that is 
in the distributed cached (rkanter via tucu) (Revision 1390629)

 Result = UNSTABLE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390629
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmitter.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/YARNRunner.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/MiniMRYarnCluster.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestMRJobs.java


 allow jobs to set a JAR that is in the distributed cached
 -

 Key: MAPREDUCE-4408
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4408
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv1, mrv2
Affects Versions: 1.0.3, 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Robert Kanter
 Fix For: 1.2.0, 2.0.2-alpha, 0.23.4

 Attachments: MAPREDUCE-4408-branch-1.patch, MAPREDUCE-4408.patch, 
 MAPREDUCE-4408.patch


 Setting a job JAR with JobConf.setJar(String) and Job.setJar(String) assumes 
 that the JAR is local to the client submitting the job, so it triggers 
 copying the JAR to HDFS and injecting it into the distributed cache.
 AFAIK, this is the only way to use uber JARs (JARs with JARs inside) in MR 
 jobs.
 For jobs launched by Oozie, all JARs are already in HDFS. In order for Oozie 
 to support uber JARs (OOZIE-654) there should be a way to specify as the job 
 JAR a JAR that is already in HDFS.
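 A hypothetical usage sketch of what this could look like (the path and scheme 
 below are illustrative only):
 {code}
 // Point the job at a jar that already lives in HDFS instead of a local file,
 // so the client does not have to re-upload it at submission time.
 final Job job = new Job(conf, "uber-jar job");
 job.setJar("hdfs://namenode:8020/user/oozie/share/lib/my-uber-app.jar");
 {code}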

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4646) client does not receive job diagnostics for failed jobs

2012-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464676#comment-13464676
 ] 

Hudson commented on MAPREDUCE-4646:
---

Integrated in Hadoop-Hdfs-0.23-Build #387 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/387/])
svn merge -c 1383709 FIXES: MAPREDUCE-4646. Fixed MR framework to send 
diagnostic information correctly to clients in case of failed jobs also. 
Contributed by Jason Lowe. (Revision 1390700)

 Result = UNSTABLE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390700
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRBuilderUtils.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestClientServiceDelegate.java


 client does not receive job diagnostics for failed jobs
 ---

 Key: MAPREDUCE-4646
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4646
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.0, 2.0.1-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
 Fix For: 2.0.2-alpha, 0.23.4

 Attachments: MAPREDUCE-4646.patch, MAPREDUCE-4646.patch, 
 MAPREDUCE-4646.patch


 When a job fails the client is not showing any diagnostics.  For example, 
 running a fail job results in this not-so-helpful message from the client:
 {noformat}
 2012-09-07 21:12:00,649 INFO  [main] mapreduce.Job 
 (Job.java:monitorAndPrintJob(1308)) - Job job_1347052207658_0001 failed with 
 state FAILED due to:
 {noformat}
 ...and nothing else to go with it indicating what went wrong.  The job 
 diagnostics are apparently not making it back to the client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4686) hadoop-mapreduce-client-core fails compilation in Eclipse due to missing Avro-generated classes

2012-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464689#comment-13464689
 ] 

Hudson commented on MAPREDUCE-4686:
---

Integrated in Hadoop-Hdfs-trunk #1178 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1178/])
MAPREDUCE-4686. hadoop-mapreduce-client-core fails compilation in Eclipse 
due to missing Avro-generated classes. Contributed by Chris Nauroth. (harsh) 
(Revision 1390446)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390446
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/pom.xml


 hadoop-mapreduce-client-core fails compilation in Eclipse due to missing 
 Avro-generated classes
 ---

 Key: MAPREDUCE-4686
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4686
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 3.0.0

 Attachments: HADOOP-8848.patch


 After importing all of hadoop-common trunk into Eclipse with the m2e plugin, 
 the Avro-generated classes in hadoop-mapreduce-client-core don't show up on 
 Eclipse's classpath.  This causes compilation errors for anything that 
 depends on those classes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4647) We should only unjar jobjar if there is a lib directory in it.

2012-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464691#comment-13464691
 ] 

Hudson commented on MAPREDUCE-4647:
---

Integrated in Hadoop-Hdfs-trunk #1178 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1178/])
MAPREDUCE-4647. We should only unjar jobjar if there is a lib directory in 
it. (Robert Evans via tgraves) (Revision 1390557)

 Result = SUCCESS
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390557
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapred/LocalDistributedCacheManager.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRApps.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/util/TestMRApps.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/YARNRunner.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LocalResource.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LocalResourceType.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/LocalResourcePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/LocalResourceRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/LocalizedResource.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/event/LocalizerResourceRequestEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestLocalResource.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestLocalResourcesTrackerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceRetention.java


 We should only unjar jobjar if there is a lib directory in it.
 --

 Key: MAPREDUCE-4647
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4647
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.3
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Fix For: 3.0.0, 0.23.4, 2.0.3-alpha

 Attachments: MR-4647-branch-0.23.txt, MR-4647.txt, MR-4647.txt, 
 MR-4647.txt, MR-4647.txt


 For backwards compatibility we recently made it so we would unjar the 
 job.jar and add anything in the lib directory of that jar to the classpath.  
 But this also slows job startup down a lot if the jar is large.  We should 
 only unjar it if actually doing so would add something new to the classpath.
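 A minimal sketch of that check (not the actual MRApps code; the helper name is 
 made up for illustration):
 {code}
 // Only pay the cost of expanding job.jar when it actually contains a lib/
 // directory, i.e. when unjarring would add new entries to the classpath.
 static boolean jobJarHasLibDir(File jobJar) throws IOException {
     JarFile jar = new JarFile(jobJar);
     try {
         for (Enumeration<JarEntry> e = jar.entries(); e.hasMoreElements();) {
             if (e.nextElement().getName().startsWith("lib/")) {
                 return true;
             }
         }
         return false;
     } finally {
         jar.close();
     }
 }
 {code}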


[jira] [Commented] (MAPREDUCE-4253) Tests for mapreduce-client-core are lying under mapreduce-client-jobclient

2012-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464692#comment-13464692
 ] 

Hudson commented on MAPREDUCE-4253:
---

Integrated in Hadoop-Hdfs-trunk #1178 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1178/])
Reverted MAPREDUCE-4253. (Revision 1390652)

 Result = SUCCESS
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390652
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestAuditLogger.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestIFile.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestJobConf.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestKeyValueTextInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestMultiFileInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestMultiFileSplit.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestReduceTask.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestSequenceFileAsBinaryInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestSequenceFileAsBinaryOutputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestSequenceFileAsTextInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestSequenceFileInputFilter.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestSortedRanges.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestStatisticsCollector.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestTaskStatus.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestUtils.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/TestCounters.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/jobcontrol/TestControlledJob.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/util/TestProcfsBasedProcessTree.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestAuditLogger.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFile.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIndexCache.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestJobConf.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestKeyValueTextInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMultiFileInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMultiFileSplit.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestReduceTask.java
* 

[jira] [Commented] (MAPREDUCE-4647) We should only unjar jobjar if there is a lib directory in it.

2012-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464745#comment-13464745
 ] 

Hudson commented on MAPREDUCE-4647:
---

Integrated in Hadoop-Mapreduce-trunk #1209 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1209/])
MAPREDUCE-4647. We should only unjar jobjar if there is a lib directory in 
it. (Robert Evans via tgraves) (Revision 1390557)

 Result = SUCCESS
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390557
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapred/LocalDistributedCacheManager.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRApps.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/v2/util/TestMRApps.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/YARNRunner.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LocalResource.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LocalResourceType.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/LocalResourcePBImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/LocalResourceRequest.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/LocalizedResource.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/event/LocalizerResourceRequestEvent.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestLocalResource.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestLocalResourcesTrackerImpl.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceRetention.java


 We should only unjar jobjar if there is a lib directory in it.
 --

 Key: MAPREDUCE-4647
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4647
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.3
Reporter: Robert Joseph Evans
Assignee: Robert Joseph Evans
 Fix For: 3.0.0, 0.23.4, 2.0.3-alpha

 Attachments: MR-4647-branch-0.23.txt, MR-4647.txt, MR-4647.txt, 
 MR-4647.txt, MR-4647.txt


 For backwards compatibility we recently made it so we would unjar the 
 job.jar and add anything in the lib directory of that jar to the classpath.  
 But this also slows job startup down a lot if the jar is large.  We should 
 only unjar it if actually doing so would add something new to the 

[jira] [Commented] (MAPREDUCE-4253) Tests for mapreduce-client-core are lying under mapreduce-client-jobclient

2012-09-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464746#comment-13464746
 ] 

Hudson commented on MAPREDUCE-4253:
---

Integrated in Hadoop-Mapreduce-trunk #1209 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1209/])
Reverted MAPREDUCE-4253. (Revision 1390652)

 Result = SUCCESS
acmurthy : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1390652
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestAuditLogger.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestIFile.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestJobConf.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestKeyValueTextInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestMultiFileInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestMultiFileSplit.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestReduceTask.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestSequenceFileAsBinaryInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestSequenceFileAsBinaryOutputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestSequenceFileAsTextInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestSequenceFileInputFilter.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestSortedRanges.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestStatisticsCollector.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestTaskStatus.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestUtils.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/TestCounters.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/jobcontrol/TestControlledJob.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/util/TestProcfsBasedProcessTree.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestAuditLogger.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFile.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIndexCache.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestJobConf.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestKeyValueTextInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMultiFileInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMultiFileSplit.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestReduceTask.java
* 

[jira] [Commented] (MAPREDUCE-4651) Benchmarking random reads with DFSIO

2012-09-27 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464794#comment-13464794
 ] 

Tsz Wo (Nicholas), SZE commented on MAPREDUCE-4651:
---

Hi Konstantin, the patch no longer applies to trunk.  Could you update it?

 Benchmarking random reads with DFSIO
 

 Key: MAPREDUCE-4651
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4651
 Project: Hadoop Map/Reduce
  Issue Type: New Feature
  Components: benchmarks, test
Affects Versions: 1.0.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: 0.23.4

 Attachments: randomDFSIO.patch, randomDFSIO.patch, randomDFSIO.patch


 TestDFSIO measures throughput of HDFS write, read, and append operations. It 
 will be useful to have an option to use it for benchmarking random reads.
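 For context, the existing sequential benchmarks are typically driven from the 
 command line along these lines (the jar name varies by release, and the exact 
 flag the patch adds for random reads is not shown here):
 {noformat}
 hadoop jar hadoop-test-*.jar TestDFSIO -write -nrFiles 10 -fileSize 128
 hadoop jar hadoop-test-*.jar TestDFSIO -read -nrFiles 10 -fileSize 128
 {noformat}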

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-4464) Reduce tasks failing with NullPointerException in ConcurrentHashMap.get()

2012-09-27 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated MAPREDUCE-4464:
---

Attachment: MAPREDUCE-4464.patch

I improved the message in Clint's patch a slight bit to indicate what to look 
at.

And I could also successfully reproduce the issue on a forced bad hostname 
machine (devel_vm.vm):

{code}
12/09/27 21:52:16 INFO mapred.JobClient: Task Id : 
attempt_201209272149_0001_r_00_2, Status : FAILED
Error: java.io.IOException: Invalid hostname found in tracker location: 
'http://devel_vm.vm:50060'
at 
org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.getMapCompletionEvents(ReduceTask.java:2920)
at 
org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.run(ReduceTask.java:2845)
{code}


 Reduce tasks failing with NullPointerException in ConcurrentHashMap.get()
 -

 Key: MAPREDUCE-4464
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4464
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: task
Affects Versions: 1.0.0
Reporter: Clint Heath
Assignee: Clint Heath
Priority: Minor
 Attachments: MAPREDUCE-4464_new.patch, MAPREDUCE-4464.patch, 
 MAPREDUCE-4464.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 If DNS does not resolve hostnames properly, reduce tasks can fail with a very 
 misleading exception.
 as per my peer Ahmed's diagnosis:
 In ReduceTask, it seems that event.getTaskTrackerHttp() returns a malformed 
 URI, and so host from:
 {code}
 String host = u.getHost();
 {code}
 is evaluated to null and the NullPointerException is thrown afterwards in the 
 ConcurrentHashMap.
 I have written a patch to check for a null hostname condition when getHost is 
 called in the getMapCompletionEvents method and print an intelligible warning 
 message rather than suppressing it until later when it becomes confusing and 
 misleading.
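 A sketch of the kind of check described above (variable names follow the 
 snippet in this description, not necessarily the exact patch):
 {code}
 // Fail fast with a clear message instead of a later NullPointerException in
 // the ConcurrentHashMap when the tracker URI has no usable host.
 URI u = URI.create(event.getTaskTrackerHttp());
 String host = u.getHost();
 if (host == null) {
     throw new IOException("Invalid hostname found in tracker location: '"
             + event.getTaskTrackerHttp() + "'");
 }
 {code}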

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4464) Reduce tasks failing with NullPointerException in ConcurrentHashMap.get()

2012-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464854#comment-13464854
 ] 

Hadoop QA commented on MAPREDUCE-4464:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12546872/MAPREDUCE-4464.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/2886//console

This message is automatically generated.

 Reduce tasks failing with NullPointerException in ConcurrentHashMap.get()
 -

 Key: MAPREDUCE-4464
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4464
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: task
Affects Versions: 1.0.0
Reporter: Clint Heath
Assignee: Clint Heath
Priority: Minor
 Attachments: MAPREDUCE-4464_new.patch, MAPREDUCE-4464.patch, 
 MAPREDUCE-4464.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 If DNS does not resolve hostnames properly, reduce tasks can fail with a very 
 misleading exception.
 as per my peer Ahmed's diagnosis:
 In ReduceTask, it seems that event.getTaskTrackerHttp() returns a malformed 
 URI, and so host from:
 {code}
 String host = u.getHost();
 {code}
 is evaluated to null and the NullPointerException is thrown afterwards in the 
 ConcurrentHashMap.
 I have written a patch to check for a null hostname condition when getHost is 
 called in the getMapCompletionEvents method and print an intelligible warning 
 message rather than suppressing it until later when it becomes confusing and 
 misleading.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-4464) Reduce tasks failing with NullPointerException in ConcurrentHashMap.get()

2012-09-27 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated MAPREDUCE-4464:
---

Issue Type: Improvement  (was: Bug)

 Reduce tasks failing with NullPointerException in ConcurrentHashMap.get()
 -

 Key: MAPREDUCE-4464
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4464
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: task
Affects Versions: 1.0.0
Reporter: Clint Heath
Assignee: Clint Heath
Priority: Minor
 Fix For: 1.2.0

 Attachments: MAPREDUCE-4464_new.patch, MAPREDUCE-4464.patch, 
 MAPREDUCE-4464.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 If DNS does not resolve hostnames properly, reduce tasks can fail with a very 
 misleading exception.
 as per my peer Ahmed's diagnosis:
 In ReduceTask, it seems that event.getTaskTrackerHttp() returns a malformed 
 URI, and so host from:
 {code}
 String host = u.getHost();
 {code}
 is evaluated to null and the NullPointerException is thrown afterwards in the 
 ConcurrentHashMap.
 I have written a patch to check for a null hostname condition when getHost is 
 called in the getMapCompletionEvents method and print an intelligible warning 
 message rather than suppressing it until later when it becomes confusing and 
 misleading.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-4464) Reduce tasks failing with NullPointerException in ConcurrentHashMap.get()

2012-09-27 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated MAPREDUCE-4464:
---

   Resolution: Fixed
Fix Version/s: 1.2.0
   Status: Resolved  (was: Patch Available)

I've committed this to branch-1. Thanks very much for the report, your keen eye 
for issues, and your patch contributions Clint! Hope to see more in future!

 Reduce tasks failing with NullPointerException in ConcurrentHashMap.get()
 -

 Key: MAPREDUCE-4464
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4464
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: task
Affects Versions: 1.0.0
Reporter: Clint Heath
Assignee: Clint Heath
Priority: Minor
 Fix For: 1.2.0

 Attachments: MAPREDUCE-4464_new.patch, MAPREDUCE-4464.patch, 
 MAPREDUCE-4464.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 If DNS does not resolve hostnames properly, reduce tasks can fail with a very 
 misleading exception.
 as per my peer Ahmed's diagnosis:
 In ReduceTask, it seems that event.getTaskTrackerHttp() returns a malformed 
 URI, and so host from:
 {code}
 String host = u.getHost();
 {code}
 is evaluated to null and the NullPointerException is thrown afterwards in the 
 ConcurrentHashMap.
 I have written a patch to check for a null hostname condition when getHost is 
 called in the getMapCompletionEvents method and print an intelligible warning 
 message rather than suppressing it until later when it becomes confusing and 
 misleading.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4464) Reduce tasks failing with NullPointerException in ConcurrentHashMap.get()

2012-09-27 Thread Clint Heath (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464860#comment-13464860
 ] 

Clint Heath commented on MAPREDUCE-4464:


Thanks Harsh!  I look forward to contributing much more too

 Reduce tasks failing with NullPointerException in ConcurrentHashMap.get()
 -

 Key: MAPREDUCE-4464
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4464
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: task
Affects Versions: 1.0.0
Reporter: Clint Heath
Assignee: Clint Heath
Priority: Minor
 Fix For: 1.2.0

 Attachments: MAPREDUCE-4464_new.patch, MAPREDUCE-4464.patch, 
 MAPREDUCE-4464.patch

   Original Estimate: 1h
  Remaining Estimate: 1h

 If DNS does not resolve hostnames properly, reduce tasks can fail with a very 
 misleading exception.
 as per my peer Ahmed's diagnosis:
 In ReduceTask, it seems that event.getTaskTrackerHttp() returns a malformed 
 URI, and so host from:
 {code}
 String host = u.getHost();
 {code}
 is evaluated to null and the NullPointerException is thrown afterwards in the 
 ConcurrentHashMap.
 I have written a patch to check for a null hostname condition when getHost is 
 called in the getMapCompletionEvents method and print an intelligible warning 
 message rather than suppressing it until later when it becomes confusing and 
 misleading.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-4690) remove deprecated properties in the default configurations

2012-09-27 Thread Jianbin Wei (JIRA)
Jianbin Wei created MAPREDUCE-4690:
--

 Summary: remove deprecated properties in the default configurations
 Key: MAPREDUCE-4690
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4690
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: client
Affects Versions: 3.0.0
Reporter: Jianbin Wei
 Fix For: 3.0.0


We need to remove the deprecated properties included in the default 
configurations, such as core-default.xml and core-site.xml.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4690) remove deprecated properties in the default configurations

2012-09-27 Thread Jianbin Wei (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464900#comment-13464900
 ] 

Jianbin Wei commented on MAPREDUCE-4690:


So far I have found the following:

deprecated                     configuration file
-------------------------------------------------
mapreduce.job.counters.limit   mapred-default.xml


 remove deprecated properties in the default configurations
 --

 Key: MAPREDUCE-4690
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4690
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: client
Affects Versions: 3.0.0
Reporter: Jianbin Wei
 Fix For: 3.0.0


 We need to remove the deprecated properties included in the default 
 configurations, such as core-default.xml and core-site.xml.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (MAPREDUCE-4690) remove deprecated properties in the default configurations

2012-09-27 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved MAPREDUCE-4690.


   Resolution: Duplicate
Fix Version/s: (was: 3.0.0)

Hi, please see MAPREDUCE-3223 for the work already done on this. Let us carry 
it forward there for MR.

Closing this out as a duplicate.

 remove deprecated properties in the default configurations
 --

 Key: MAPREDUCE-4690
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4690
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: client
Affects Versions: 3.0.0
Reporter: Jianbin Wei

 We need to remove the deprecated properties included in the default 
 configurations, such as core-default.xml and core-site.xml.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4278) cannot run two local jobs in parallel from the same gateway.

2012-09-27 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13464965#comment-13464965
 ] 

Sandy Ryza commented on MAPREDUCE-4278:
---

If I understand correctly, the job configuration file is named after the job 
ID, of which the unique identifier would be a part, so the files would not 
clash.
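
For illustration only (this is not the attached patch; the randomized-id idea 
and the path shown are assumptions), the kind of per-JVM unique component being 
discussed might look like:

{code}
// Sketch: a local job id that embeds a random per-JVM component, so job.xml
// paths derived from the id cannot collide across concurrent local-mode runs.
import java.util.Random;

public class LocalJobIdSketch {
  public static void main(String[] args) {
    int uniqueId = new Random().nextInt(Integer.MAX_VALUE);
    String jobId = String.format("job_local%d_%04d", uniqueId, 1);
    // The staging path below is purely illustrative.
    System.out.println("/tmp/hadoop/mapred/staging/" + jobId + "/job.xml");
  }
}
{code}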

 cannot run two local jobs in parallel from the same gateway.
 

 Key: MAPREDUCE-4278
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4278
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.20.205.0, 0.23.1, 1.0.2
Reporter: Araceli Henley

 I cannot run two local-mode jobs from Pig in parallel from the same gateway, 
 which is a typical use case. If I re-run the tests sequentially, then the 
 tests pass. This appears to be a problem in Hadoop.
 Additionally, the Pig harness expects to be able to run 
 Pig-version-undertest against Pig-version-stable from the same gateway.
 To replicate the error:
 I have two clusters running from the same gateway.
 If I run the Pig regression suite nightly.conf in local mode in parallel, 
 once on each cluster, conflicts in M/R local mode result in failures in the 
 tests.
 ERROR1:
 org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find
 output/file.out in any of the configured local directories
 at
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:429)
 at
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:160)
 at
 org.apache.hadoop.mapred.MapOutputFile.getOutputFile(MapOutputFile.java:56)
 at org.apache.hadoop.mapred.Task.calculateOutputSize(Task.java:944)
 at org.apache.hadoop.mapred.Task.sendLastUpdate(Task.java:924)
 at org.apache.hadoop.mapred.Task.done(Task.java:875)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:374)
 ---
 ERROR2:
 2012-05-17 20:25:36,762 [main] INFO
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher
 -
 HadoopJobId: job_local_0001
 2012-05-17 20:25:36,778 [Thread-3] INFO  org.apache.hadoop.mapred.Task -
 Using ResourceCalculatorPlugin :
 org.apache.hadoop.util.LinuxResourceCalculatorPlugin@ffa490e
 2012-05-17 20:25:36,837 [Thread-3] WARN
 org.apache.hadoop.mapred.LocalJobRunner - job_local_0001
 java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
 at java.util.ArrayList.RangeCheck(ArrayList.java:547)
 at java.util.ArrayList.get(ArrayList.java:322)
 at
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getLoadFunc(PigInputFormat.java:153)
 at
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.createRecordReader(PigInputFormat.java:106)
 at
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:489)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:731)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
 at
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
 2012-05-17 20:25:41,291 [main] INFO
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-4278) cannot run two local jobs in parallel from the same gateway.

2012-09-27 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated MAPREDUCE-4278:
--

Attachment: MAPREDUCE-4278-branch1.patch

 cannot run two local jobs in parallel from the same gateway.
 

 Key: MAPREDUCE-4278
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4278
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.20.205.0
Reporter: Araceli Henley
 Attachments: MAPREDUCE-4278-branch1.patch


 I cannot run two local-mode jobs from Pig in parallel from the same gateway, 
 which is a typical use case. If I re-run the tests sequentially, then the 
 tests pass. This appears to be a problem in Hadoop.
 Additionally, the Pig harness expects to be able to run 
 Pig-version-undertest against Pig-version-stable from the same gateway.
 To replicate the error:
 I have two clusters running from the same gateway.
 If I run the Pig regression suite nightly.conf in local mode in parallel, 
 once on each cluster, conflicts in M/R local mode result in failures in the 
 tests.
 ERROR1:
 org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find
 output/file.out in any of the configured local directories
 at
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:429)
 at
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:160)
 at
 org.apache.hadoop.mapred.MapOutputFile.getOutputFile(MapOutputFile.java:56)
 at org.apache.hadoop.mapred.Task.calculateOutputSize(Task.java:944)
 at org.apache.hadoop.mapred.Task.sendLastUpdate(Task.java:924)
 at org.apache.hadoop.mapred.Task.done(Task.java:875)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:374)
 ---
 ERROR2:
 2012-05-17 20:25:36,762 [main] INFO
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher
 -
 HadoopJobId: job_local_0001
 2012-05-17 20:25:36,778 [Thread-3] INFO  org.apache.hadoop.mapred.Task -
 Using ResourceCalculatorPlugin :
 org.apache.hadoop.util.LinuxResourceCalculatorPlugin@ffa490e
 2012-05-17 20:25:36,837 [Thread-3] WARN
 org.apache.hadoop.mapred.LocalJobRunner - job_local_0001
 java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
 at java.util.ArrayList.RangeCheck(ArrayList.java:547)
 at java.util.ArrayList.get(ArrayList.java:322)
 at
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getLoadFunc(PigInputFormat.java:153)
 at
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.createRecordReader(PigInputFormat.java:106)
 at
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:489)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:731)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
 at
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
 2012-05-17 20:25:41,291 [main] INFO
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-4278) cannot run two local jobs in parallel from the same gateway.

2012-09-27 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated MAPREDUCE-4278:
--

Affects Version/s: (was: 1.0.2)
   (was: 0.23.1)
   Status: Patch Available  (was: Open)

 cannot run two local jobs in parallel from the same gateway.
 

 Key: MAPREDUCE-4278
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4278
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.20.205.0
Reporter: Araceli Henley
 Attachments: MAPREDUCE-4278-branch1.patch


 I cannot run two local-mode jobs from Pig in parallel from the same gateway, 
 which is a typical use case. If I re-run the tests sequentially, then the 
 tests pass. This appears to be a problem in Hadoop.
 Additionally, the Pig harness expects to be able to run 
 Pig-version-undertest against Pig-version-stable from the same gateway.
 To replicate the error:
 I have two clusters running from the same gateway.
 If I run the Pig regression suite nightly.conf in local mode in parallel, 
 once on each cluster, conflicts in M/R local mode result in failures in the 
 tests.
 ERROR1:
 org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find
 output/file.out in any of the configured local directories
 at
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:429)
 at
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:160)
 at
 org.apache.hadoop.mapred.MapOutputFile.getOutputFile(MapOutputFile.java:56)
 at org.apache.hadoop.mapred.Task.calculateOutputSize(Task.java:944)
 at org.apache.hadoop.mapred.Task.sendLastUpdate(Task.java:924)
 at org.apache.hadoop.mapred.Task.done(Task.java:875)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:374)
 ---
 ERROR2:
 2012-05-17 20:25:36,762 [main] INFO
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher
 -
 HadoopJobId: job_local_0001
 2012-05-17 20:25:36,778 [Thread-3] INFO  org.apache.hadoop.mapred.Task -
 Using ResourceCalculatorPlugin :
 org.apache.hadoop.util.LinuxResourceCalculatorPlugin@ffa490e
 2012-05-17 20:25:36,837 [Thread-3] WARN
 org.apache.hadoop.mapred.LocalJobRunner - job_local_0001
 java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
 at java.util.ArrayList.RangeCheck(ArrayList.java:547)
 at java.util.ArrayList.get(ArrayList.java:322)
 at
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getLoadFunc(PigInputFormat.java:153)
 at
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.createRecordReader(PigInputFormat.java:106)
 at
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:489)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:731)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
 at
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
 2012-05-17 20:25:41,291 [main] INFO
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4558) TestJobTrackerSafeMode is failing

2012-09-27 Thread Matt Foley (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13465147#comment-13465147
 ] 

Matt Foley commented on MAPREDUCE-4558:
---

Accepted.

 TestJobTrackerSafeMode is failing
 -

 Key: MAPREDUCE-4558
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4558
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: Siddharth Seth
Assignee: Siddharth Seth
 Fix For: 1.1.0

 Attachments: MR4558.txt


 MAPREDUCE-1906 exposed an issue with this unit test. The test has 3 TTs 
 running but checks for the TT count to reach exactly 2 (a count that would 
 only be reached with a higher heartbeat interval).
 The test ends up stuck, with the following message repeated multiple 
 times:
 {code}
 [junit] 2012-08-15 11:26:46,299 INFO  mapred.TestJobTrackerSafeMode 
 (TestJobTrackerSafeMode.java:checkTrackers(201)) - Waiting for Initialize all 
 Task Trackers
 [junit] 2012-08-15 11:26:47,301 INFO  mapred.TestJobTrackerSafeMode 
 (TestJobTrackerSafeMode.java:checkTrackers(201)) - Waiting for Initialize all 
 Task Trackers
 [junit] 2012-08-15 11:26:48,302 INFO  mapred.TestJobTrackerSafeMode 
 (TestJobTrackerSafeMode.java:checkTrackers(201)) - Waiting for Initialize all 
 Task Trackers
 [junit] 2012-08-15 11:26:49,303 INFO  mapred.TestJobTrackerSafeMode 
 (TestJobTrackerSafeMode.java:checkTrackers(201)) - Waiting for Initialize all 
 Task Trackers
 {code}
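 A sketch of the kind of check that avoids this hang (illustration only; the 
 actual test's checkTrackers signature may differ):
 {code}
 // Sketch, not the actual TestJobTrackerSafeMode code: wait for at least the
 // expected number of trackers instead of exactly that number, so the loop
 // cannot get stuck once all three TTs have heartbeated in.
 private boolean checkTrackers(JobTracker jt, int expectedTrackers) {
   ClusterStatus status = jt.getClusterStatus(true);
   return status.getTaskTrackers() >= expectedTrackers;
 }
 {code}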

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4328) Add the option to quiesce the JobTracker

2012-09-27 Thread Matt Foley (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13465148#comment-13465148
 ] 

Matt Foley commented on MAPREDUCE-4328:
---

Accepted.

 Add the option to quiesce the JobTracker
 

 Key: MAPREDUCE-4328
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4328
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv1
Affects Versions: 1.0.3
Reporter: Arun C Murthy
Assignee: Arun C Murthy
 Fix For: 1.1.0

 Attachments: MAPREDUCE-4328.patch, MAPREDUCE-4328.patch, 
 TestJobTrackerQuiescence.java


 In several failure scenarios it would be very handy to have an option to 
 quiesce the JobTracker.
 Recently, we saw a case where the NameNode had to be rebooted at a customer 
 site due to a random hardware failure; in such a case it would have been 
 nice to avoid losing jobs by quiescing the JobTracker.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4278) cannot run two local jobs in parallel from the same gateway.

2012-09-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13465191#comment-13465191
 ] 

Hadoop QA commented on MAPREDUCE-4278:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12546910/MAPREDUCE-4278-branch1.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/2887//console

This message is automatically generated.

 cannot run two local jobs in parallel from the same gateway.
 

 Key: MAPREDUCE-4278
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4278
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 0.20.205.0
Reporter: Araceli Henley
 Attachments: MAPREDUCE-4278-branch1.patch


 I cannot run two local-mode jobs from Pig in parallel from the same gateway, 
 which is a typical use case. If I re-run the tests sequentially, then the 
 tests pass. This appears to be a problem in Hadoop.
 Additionally, the Pig harness expects to be able to run 
 Pig-version-undertest against Pig-version-stable from the same gateway.
 To replicate the error:
 I have two clusters running from the same gateway.
 If I run the Pig regression suite nightly.conf in local mode in parallel, 
 once on each cluster, conflicts in M/R local mode result in failures in the 
 tests.
 ERROR1:
 org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find
 output/file.out in any of the configured local directories
 at
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathToRead(LocalDirAllocator.java:429)
 at
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathToRead(LocalDirAllocator.java:160)
 at
 org.apache.hadoop.mapred.MapOutputFile.getOutputFile(MapOutputFile.java:56)
 at org.apache.hadoop.mapred.Task.calculateOutputSize(Task.java:944)
 at org.apache.hadoop.mapred.Task.sendLastUpdate(Task.java:924)
 at org.apache.hadoop.mapred.Task.done(Task.java:875)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:374)
 ---
 ERROR2:
 2012-05-17 20:25:36,762 [main] INFO
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher
 -
 HadoopJobId: job_local_0001
 2012-05-17 20:25:36,778 [Thread-3] INFO  org.apache.hadoop.mapred.Task -
 Using ResourceCalculatorPlugin :
 org.apache.hadoop.util.LinuxResourceCalculatorPlugin@ffa490e
 2012-05-17 20:25:36,837 [Thread-3] WARN
 org.apache.hadoop.mapred.LocalJobRunner - job_local_0001
 java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
 at java.util.ArrayList.RangeCheck(ArrayList.java:547)
 at java.util.ArrayList.get(ArrayList.java:322)
 at
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getLoadFunc(PigInputFormat.java:153)
 at
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.createRecordReader(PigInputFormat.java:106)
 at
 org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.<init>(MapTask.java:489)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:731)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
 at
 org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
 2012-05-17 20:25:41,291 [main] INFO
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-4691) Historyserver can report Unknown job after RM says job has completed.

2012-09-27 Thread Jason Lowe (JIRA)
Jason Lowe created MAPREDUCE-4691:
-

 Summary: Historyserver can report Unknown job after RM says job 
has completed.
 Key: MAPREDUCE-4691
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4691
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver, mrv2
Affects Versions: 2.0.1-alpha, 0.23.3
Reporter: Jason Lowe
Priority: Critical


Example traceback from the client:

{noformat}
2012-09-27 20:28:38,068 [main] INFO  
org.apache.hadoop.mapred.ClientServiceDelegate - Application state is 
completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
2012-09-27 20:28:38,530 [main] WARN  
org.apache.hadoop.mapred.ClientServiceDelegate - Error from remote end: Unknown 
job job_1348097917603_3019
2012-09-27 20:28:38,530 [main] ERROR 
org.apache.hadoop.security.UserGroupInformation - PriviledgedActionException 
as:xxx (auth:KERBEROS) 
cause:org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl: 
Unknown job job_1348097917603_3019
2012-09-27 20:28:38,531 [main] WARN  org.apache.pig.tools.pigstats.JobStats - 
Failed to get map task report
RemoteTrace: 
 at LocalTrace: 
org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl: 
Unknown job job_1348097917603_3019
at 
org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:156)
at $Proxy11.getJobReport(Unknown Source)
at 
org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getJobReport(MRClientProtocolPBClientImpl.java:116)
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:298)
at 
org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:383)
at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:482)
at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:184)
...
{noformat}


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-4691) Historyserver can report Unknown job after RM says job has completed

2012-09-27 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated MAPREDUCE-4691:
--

Summary: Historyserver can report Unknown job after RM says job has 
completed  (was: Historyserver can report Unknown job after RM says job has 
completed.)

There is a race condition in the historyserver where two threads can be trying 
to scan the same user's done intermediate directory for two separate jobs.  One 
thread will win the race and update the user timestamp in 
{{HistoryFileManager.scanIntermediateDirectory}} *before* it has actually 
completed the scan.  The second thread will then see the timestamp has been 
updated, think there's no point in doing a scan, and return with no job found.
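
A minimal sketch of the kind of ordering fix this implies (the field and helper 
names here are assumptions, not the actual HistoryFileManager code):

{code}
// Sketch: record the scan start time but publish the user's timestamp only
// after the directory scan completes, so a second thread cannot decide to
// skip a scan that is still in progress and then report "Unknown job".
private void scanIntermediateDirectory(String user) throws IOException {
  long scanStartTime = System.currentTimeMillis();
  scanIntermediateDirectoryForUser(user);   // assumed helper that lists files
  userTimestamps.put(user, scanStartTime);  // publish only after completion
}
{code}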

 Historyserver can report Unknown job after RM says job has completed
 --

 Key: MAPREDUCE-4691
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4691
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver, mrv2
Affects Versions: 0.23.3, 2.0.1-alpha
Reporter: Jason Lowe
Priority: Critical

 Example traceback from the client:
 {noformat}
 2012-09-27 20:28:38,068 [main] INFO  
 org.apache.hadoop.mapred.ClientServiceDelegate - Application state is 
 completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
 2012-09-27 20:28:38,530 [main] WARN  
 org.apache.hadoop.mapred.ClientServiceDelegate - Error from remote end: 
 Unknown job job_1348097917603_3019
 2012-09-27 20:28:38,530 [main] ERROR 
 org.apache.hadoop.security.UserGroupInformation - PriviledgedActionException 
 as:xxx (auth:KERBEROS) 
 cause:org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl: 
 Unknown job job_1348097917603_3019
 2012-09-27 20:28:38,531 [main] WARN  org.apache.pig.tools.pigstats.JobStats - 
 Failed to get map task report
 RemoteTrace: 
  at LocalTrace: 
 org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl: 
 Unknown job job_1348097917603_3019
 at 
 org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:156)
 at $Proxy11.getJobReport(Unknown Source)
 at 
 org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getJobReport(MRClientProtocolPBClientImpl.java:116)
 at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:298)
 at 
 org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:383)
 at 
 org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:482)
 at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:184)
 ...
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-4655) MergeManager.reserve can OutOfMemoryError if more than 10% of max memory is used on non-MapOutputs

2012-09-27 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13465215#comment-13465215
 ] 

Sandy Ryza commented on MAPREDUCE-4655:
---

I looked at a heap dump, and it appears that the problem was caused by Avro 
holding on to a reference after it was done with it. Filed AVRO-1175.

 MergeManager.reserve can OutOfMemoryError if more than 10% of max memory is 
 used on non-MapOutputs
 --

 Key: MAPREDUCE-4655
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4655
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.1-alpha
Reporter: Sandy Ryza

 The MergeManager does a memory check, using a limit that defaults to 90% of 
 Runtime.getRuntime().maxMemory(). Allocations that would bring the total 
 memory allocated by the MergeManager over this limit are asked to wait until 
 memory frees up. Disk is used for single allocations that would be over 25% 
 of the memory limit.
 If some other part of the reducer were using more than 10% of the memory, 
 the current check wouldn't stop an OutOfMemoryError.
 Before creating an in-memory MapOutput, a check can be done using 
 Runtime.getRuntime().freeMemory(), waiting until memory is freed up if it 
 fails.
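 As a rough illustration of such a guard (a sketch only, not the actual 
 MergeManager code; the method name is assumed):
 {code}
 // Sketch: compute the JVM's real headroom before reserving an in-memory
 // MapOutput, so the reservation waits (or spills to disk) even when other
 // reducer code already holds a large share of the heap.
 private boolean fitsInHeap(long requestedSize) {
   Runtime rt = Runtime.getRuntime();
   // Headroom = unallocated heap plus the free part of the allocated heap.
   long headroom = rt.maxMemory() - rt.totalMemory() + rt.freeMemory();
   return requestedSize <= headroom;
 }
 {code}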
 12/08/17 10:36:29 INFO mapreduce.Job: Task Id : 
 attempt_1342723342632_0010_r_05_0, Status : FAILED 
 Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in 
 shuffle in fetcher#6 
 at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:123) 
 at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:371) 
 at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:152) 
 at java.security.AccessController.doPrivileged(Native Method) 
 at javax.security.auth.Subject.doAs(Subject.java:416) 
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
  
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:147) 
 Caused by: java.lang.OutOfMemoryError: Java heap space 
 at
 org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:58)
 at
 org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:45)
 at
 org.apache.hadoop.mapreduce.task.reduce.MapOutput.<init>(MapOutput.java:97)
 at
 org.apache.hadoop.mapreduce.task.reduce.MergeManager.unconditionalReserve(MergeManager.java:286)
 at
 org.apache.hadoop.mapreduce.task.reduce.MergeManager.reserve(MergeManager.java:276)
 at
 org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:327)
 at
 org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:273)
 at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:153)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira