[jira] [Commented] (MAPREDUCE-4742) Fix typo in nnbench#displayUsage

2015-03-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14357041#comment-14357041
 ] 

Hudson commented on MAPREDUCE-4742:
---

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2079 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2079/])
MAPREDUCE-4742. Fix typo in nnbench#displayUsage. Contributed by Liang Xie. 
(ozawa: rev 20b8ee1350e62d1b21c951e653302b6e6a8e4f7e)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/hdfs/NNBench.java
* hadoop-mapreduce-project/CHANGES.txt


 Fix typo in nnbench#displayUsage
 

 Key: MAPREDUCE-4742
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4742
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.2-alpha, 0.23.4, 2.6.0
Reporter: Liang Xie
Priority: Trivial
 Fix For: 2.8.0

 Attachments: MAPREDUCE-4742.txt






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-4742) Fix typo in nnbench#displayUsage

2015-03-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14357012#comment-14357012
 ] 

Hudson commented on MAPREDUCE-4742:
---

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #129 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/129/])
MAPREDUCE-4742. Fix typo in nnbench#displayUsage. Contributed by Liang Xie. 
(ozawa: rev 20b8ee1350e62d1b21c951e653302b6e6a8e4f7e)
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/hdfs/NNBench.java


 Fix typo in nnbench#displayUsage
 

 Key: MAPREDUCE-4742
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4742
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.2-alpha, 0.23.4, 2.6.0
Reporter: Liang Xie
Priority: Trivial
 Fix For: 2.8.0

 Attachments: MAPREDUCE-4742.txt








[jira] [Commented] (MAPREDUCE-4815) Speed up FileOutputCommitter#commitJob for many output files

2015-03-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14357014#comment-14357014
 ] 

Hudson commented on MAPREDUCE-4815:
---

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #129 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/129/])
MAPREDUCE-4815. Speed up FileOutputCommitter#commitJob for many output files. 
(Siqi Li via gera) (gera: rev aa92b764a7ddb888d097121c4d610089a0053d11)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestFileOutputCommitter.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestFileOutputCommitter.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
* hadoop-mapreduce-project/CHANGES.txt


 Speed up FileOutputCommitter#commitJob for many output files
 

 Key: MAPREDUCE-4815
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4815
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.3, 2.0.1-alpha, 2.4.1
Reporter: Jason Lowe
Assignee: Siqi Li
  Labels: perfomance
 Fix For: 2.7.0

 Attachments: MAPREDUCE-4815.v10.patch, MAPREDUCE-4815.v11.patch, 
 MAPREDUCE-4815.v12.patch, MAPREDUCE-4815.v13.patch, MAPREDUCE-4815.v14.patch, 
 MAPREDUCE-4815.v15.patch, MAPREDUCE-4815.v16.patch, MAPREDUCE-4815.v17.patch, 
 MAPREDUCE-4815.v3.patch, MAPREDUCE-4815.v4.patch, MAPREDUCE-4815.v5.patch, 
 MAPREDUCE-4815.v6.patch, MAPREDUCE-4815.v7.patch, MAPREDUCE-4815.v8.patch, 
 MAPREDUCE-4815.v9.patch


 If a job generates many files to commit then the commitJob method call at the 
 end of the job can take minutes.  This is a performance regression from 1.x, 
 as 1.x had the tasks commit directly to the final output directory as they 
 were completing and commitJob had very little to do.  The commit work was 
 processed in parallel and overlapped the processing of outstanding tasks.  In 
 0.23/2.x, the commit is single-threaded and waits until all tasks have 
 completed before commencing.
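
The description above contrasts 1.x, where per-task commit work overlapped other running tasks, with the single-threaded commitJob loop in 0.23/2.x. As a rough illustration of why fanning the per-file work out to a pool of workers helps, here is a minimal, self-contained Java sketch. The class, the `commitOne` stand-in, and the task count are hypothetical; this is not the actual FileOutputCommitter code, nor necessarily the approach the committed patch takes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: overlapping commit work with a fixed thread pool,
// rather than walking all task outputs in a single thread.
public class ParallelCommitSketch {
    static final int TASK_COUNT = 100;

    // Stand-in for moving one task's output into the final directory.
    static String commitOne(int taskId) {
        return "final/part-" + taskId;
    }

    // Submits every per-task commit to the pool, then waits for all of them.
    static List<String> commitAll(int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (int i = 0; i < TASK_COUNT; i++) {
                final int id = i;
                futures.add(pool.submit(() -> commitOne(id)));
            }
            List<String> committed = new ArrayList<>();
            for (Future<String> f : futures) {
                committed.add(f.get()); // re-throws any per-file failure
            }
            return committed;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println("committed " + commitAll(8).size() + " files");
    }
}
```

With real filesystem renames instead of the stand-in, each `commitOne` call is dominated by namenode round-trip latency, which is exactly the work a pool like this can overlap.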





[jira] [Commented] (MAPREDUCE-4683) We need to fix our build to create/distribute hadoop-mapreduce-client-core-tests.jar

2015-03-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356982#comment-14356982
 ] 

Hadoop QA commented on MAPREDUCE-4683:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12669891/MAPREDUCE-4683.patch
  against trunk revision 30c428a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5281//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5281//console

This message is automatically generated.

 We need to fix our build to create/distribute 
 hadoop-mapreduce-client-core-tests.jar
 

 Key: MAPREDUCE-4683
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4683
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: build
Reporter: Arun C Murthy
Assignee: Akira AJISAKA
Priority: Critical
 Attachments: MAPREDUCE-4683.patch


 We need to fix our build to create/distribute 
 hadoop-mapreduce-client-core-tests.jar, need this before MAPREDUCE-4253





[jira] [Commented] (MAPREDUCE-4815) Speed up FileOutputCommitter#commitJob for many output files

2015-03-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14357043#comment-14357043
 ] 

Hudson commented on MAPREDUCE-4815:
---

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2079 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2079/])
MAPREDUCE-4815. Speed up FileOutputCommitter#commitJob for many output files. 
(Siqi Li via gera) (gera: rev aa92b764a7ddb888d097121c4d610089a0053d11)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestFileOutputCommitter.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestFileOutputCommitter.java


 Speed up FileOutputCommitter#commitJob for many output files
 

 Key: MAPREDUCE-4815
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4815
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.3, 2.0.1-alpha, 2.4.1
Reporter: Jason Lowe
Assignee: Siqi Li
  Labels: perfomance
 Fix For: 2.7.0

 Attachments: MAPREDUCE-4815.v10.patch, MAPREDUCE-4815.v11.patch, 
 MAPREDUCE-4815.v12.patch, MAPREDUCE-4815.v13.patch, MAPREDUCE-4815.v14.patch, 
 MAPREDUCE-4815.v15.patch, MAPREDUCE-4815.v16.patch, MAPREDUCE-4815.v17.patch, 
 MAPREDUCE-4815.v3.patch, MAPREDUCE-4815.v4.patch, MAPREDUCE-4815.v5.patch, 
 MAPREDUCE-4815.v6.patch, MAPREDUCE-4815.v7.patch, MAPREDUCE-4815.v8.patch, 
 MAPREDUCE-4815.v9.patch


 If a job generates many files to commit then the commitJob method call at the 
 end of the job can take minutes.  This is a performance regression from 1.x, 
 as 1.x had the tasks commit directly to the final output directory as they 
 were completing and commitJob had very little to do.  The commit work was 
 processed in parallel and overlapped the processing of outstanding tasks.  In 
 0.23/2.x, the commit is single-threaded and waits until all tasks have 
 completed before commencing.





[jira] [Resolved] (MAPREDUCE-5560) org.apache.hadoop.mapreduce.v2.app.commit.TestCommitterEventHandler failing on trunk

2015-03-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-5560.
-
Resolution: Cannot Reproduce

stale

 org.apache.hadoop.mapreduce.v2.app.commit.TestCommitterEventHandler failing 
 on trunk
 

 Key: MAPREDUCE-5560
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5560
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Cindy Li
Priority: Critical
 Attachments: 
 org.apache.hadoop.mapreduce.v2.app.commit.TestCommitterEventHandler-output.txt


 Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.406 sec 
 <<< FAILURE! - in 
 org.apache.hadoop.mapreduce.v2.app.commit.TestCommitterEventHandler
 testBasic(org.apache.hadoop.mapreduce.v2.app.commit.TestCommitterEventHandler)
   Time elapsed: 0.185 sec  <<< FAILURE!
 java.lang.AssertionError: null
   at org.junit.Assert.fail(Assert.java:92)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at org.junit.Assert.assertNotNull(Assert.java:526)
   at org.junit.Assert.assertNotNull(Assert.java:537)
   at 
 org.apache.hadoop.mapreduce.v2.app.commit.TestCommitterEventHandler.testBasic(TestCommitterEventHandler.java:263)





[jira] [Updated] (MAPREDUCE-4363) Hadoop 1.X, 2.X and trunk do not build on Fedora 17

2015-03-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-4363:

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

 Hadoop 1.X, 2.X and trunk do not build on Fedora 17
 ---

 Key: MAPREDUCE-4363
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4363
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: build, pipes
Affects Versions: 1.0.3
Reporter: Bruno Mahé
Assignee: Bruno Mahé
  Labels: bigtop
 Attachments: MAPREDUCE-4363-trunk.patch, MAPREDUCE-4363.patch


 I upgraded my machine to the latest Fedora 17 and now Apache Hadoop is 
 failing to build. This seems related to the bump in version of gcc to 4.7.0





[jira] [Resolved] (MAPREDUCE-5552) org.apache.hadoop.mapred.TestJobCleanup failing on trunk

2015-03-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-5552.
-
Resolution: Cannot Reproduce

stale

 org.apache.hadoop.mapred.TestJobCleanup failing on trunk
 

 Key: MAPREDUCE-5552
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5552
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Omkar Vinit Joshi
Priority: Blocker

 Running org.apache.hadoop.mapred.TestJobCleanup
 Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 138.031 sec 
 <<< FAILURE! - in org.apache.hadoop.mapred.TestJobCleanup
 testDefaultCleanupAndAbort(org.apache.hadoop.mapred.TestJobCleanup)  Time 
 elapsed: 25.522 sec  <<< ERROR!
 java.lang.NullPointerException: null
   at 
 org.apache.hadoop.mapred.TestJobCleanup.testFailedJob(TestJobCleanup.java:199)
   at 
 org.apache.hadoop.mapred.TestJobCleanup.testDefaultCleanupAndAbort(TestJobCleanup.java:275)
 testCustomAbort(org.apache.hadoop.mapred.TestJobCleanup)  Time elapsed: 
 31.755 sec  <<< ERROR!
 java.lang.NullPointerException: null
   at 
 org.apache.hadoop.mapred.TestJobCleanup.testFailedJob(TestJobCleanup.java:199)
   at 
 org.apache.hadoop.mapred.TestJobCleanup.testCustomAbort(TestJobCleanup.java:296)
 testCustomCleanup(org.apache.hadoop.mapred.TestJobCleanup)  Time elapsed: 
 52.086 sec  <<< ERROR!
 java.lang.NullPointerException: null
   at 
 org.apache.hadoop.mapred.TestJobCleanup.testFailedJob(TestJobCleanup.java:199)
   at 
 org.apache.hadoop.mapred.TestJobCleanup.testCustomCleanup(TestJobCleanup.java:319)





[jira] [Resolved] (MAPREDUCE-2060) IOException: Filesystem closed on submitJob

2015-03-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-2060.
-
Resolution: Won't Fix

stale/won't fix. 0.22 is dead

 IOException: Filesystem closed on submitJob
 ---

 Key: MAPREDUCE-2060
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2060
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 0.22.0
 Environment: 
 https://svn.apache.org/repos/asf/hadoop/mapreduce/trunk@994941
 https://svn.apache.org/repos/asf/hadoop/hdfs/trunk@993542
Reporter: Dan Adkins

 I get the following strange error on the jobtracker when attempting to submit 
 a job:
 10/09/09 20:31:35 INFO ipc.Server: IPC Server handler 7 on 31000, call 
 submitJob(job_201009092028_0001, 
 hdfs://hns4.sea1.qc:21000/tmp/hadoop-mr20/mapred/staging/dadkins/.staging/job_201009092028_0001,
  org.apache.hadoop.security.Credentials@20c87621) from 10.128.130.145:49253: 
 error: java.io.IOException: Filesystem closed
 java.io.IOException: Filesystem closed
   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:307)
   at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1212)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:494)
   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1491)
   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:395)
   at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3078)
   at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3014)
   at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:2996)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:349)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1380)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1376)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1105)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1374)





[jira] [Updated] (MAPREDUCE-4253) Tests for mapreduce-client-core are lying under mapreduce-client-jobclient

2015-03-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-4253:

Issue Type: Test  (was: Task)

 Tests for mapreduce-client-core are lying under mapreduce-client-jobclient
 --

 Key: MAPREDUCE-4253
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4253
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: client
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Tsuyoshi Ozawa
 Attachments: MR-4253.1.patch, MR-4253.2.patch, 
 crossing_project_checker.rb, result.txt


 Many of the tests for client libs from mapreduce-client-core are lying under 
 mapreduce-client-jobclient.
 We should investigate if this is the right thing to do and if not, move the 
 tests back into client-core.





[jira] [Updated] (MAPREDUCE-3504) capacity scheduler allow capacity greater then 100% as long as its less then 101%

2015-03-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-3504:

Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

Stale and likely no longer relevant for newer releases.

 capacity scheduler allow capacity greater then 100% as long as its less then 
 101%
 -

 Key: MAPREDUCE-3504
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3504
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: capacity-sched
Affects Versions: 0.20.205.0, 1.0.0
Reporter: Thomas Graves
 Attachments: MAPREDUCE-3504-1.patch, MAPREDUCE-3504.patch


 When the sum of all capacities is >= 101 or < 100, we got the following error 
 when starting the jobtracker. However, when 100 <= sum < 101, the jobtracker 
 does not report an exception and starts with all queues initialized.
 For instance, a capacity sum of 29.5+60+11.4 = 100.9 does not cause an exception.
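
To make the reported boundary concrete, here is a tiny self-contained Java sketch of the two checks; the method names and exact comparisons are assumptions for illustration, not the scheduler's actual validation code.

```java
// Hypothetical sketch of the off-by-one-percent boundary described above:
// a check that rejects only sums at or beyond 101 lets totals in the open
// interval (100, 101) slip through.
public class CapacityCheckSketch {
    // Mirrors the reported behavior: accepts anything strictly below 101.
    static boolean acceptsLoose(float sum) {
        return sum < 101.0f;
    }

    // A stricter check that rejects anything over 100.
    static boolean acceptsStrict(float sum) {
        return sum <= 100.0f;
    }

    public static void main(String[] args) {
        float sum = 29.5f + 60f + 11.4f; // the 100.9 example from the report
        System.out.println("loose accepts " + sum + ": " + acceptsLoose(sum));
        System.out.println("strict accepts " + sum + ": " + acceptsStrict(sum));
    }
}
```

Under the loose check the 100.9 total from the report is accepted, while the strict variant rejects it; only a total of exactly 100 (or less) passes both.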





[jira] [Commented] (MAPREDUCE-5549) distcp app should fail if m/r job fails

2015-03-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14357822#comment-14357822
 ] 

Hadoop QA commented on MAPREDUCE-5549:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12653802/MAPREDUCE-5549-002.patch
  against trunk revision 7a346bc.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5282//console

This message is automatically generated.

 distcp app should fail if m/r job fails
 ---

 Key: MAPREDUCE-5549
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5549
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: distcp, mrv2
Affects Versions: 3.0.0
Reporter: David Rosenstrauch
 Attachments: MAPREDUCE-5549-001.patch, MAPREDUCE-5549-002.patch


 I run distcpv2 in a scripted manner.  The script checks if the distcp step 
 fails and, if so, aborts the rest of the script.  However, I ran into an 
 issue today where the distcp job failed, but my calling script went on its 
 merry way.
 Digging into the code a bit more (at 
 https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java),
  I think I see the issue:  the distcp app is not returning an error exit code 
 to the shell when the distcp job fails.  This is a big problem, IMO, as it 
 prevents distcp from being successfully used in a scripted environment.  IMO, 
 the code should change like so:
 Before:
 {code:title=org.apache.hadoop.tools.DistCp.java}
 //...
   public int run(String[] argv) {
 //...
     try {
       execute();
     } catch (InvalidInputException e) {
       LOG.error("Invalid input: ", e);
       return DistCpConstants.INVALID_ARGUMENT;
     } catch (DuplicateFileException e) {
       LOG.error("Duplicate files in input path: ", e);
       return DistCpConstants.DUPLICATE_INPUT;
     } catch (Exception e) {
       LOG.error("Exception encountered ", e);
       return DistCpConstants.UNKNOWN_ERROR;
     }
     return DistCpConstants.SUCCESS;
   }
 //...
 {code}
 After:
 {code:title=org.apache.hadoop.tools.DistCp.java}
 //...
   public int run(String[] argv) {
 //...
     Job job = null;
     try {
       job = execute();
     } catch (InvalidInputException e) {
       LOG.error("Invalid input: ", e);
       return DistCpConstants.INVALID_ARGUMENT;
     } catch (DuplicateFileException e) {
       LOG.error("Duplicate files in input path: ", e);
       return DistCpConstants.DUPLICATE_INPUT;
     } catch (Exception e) {
       LOG.error("Exception encountered ", e);
       return DistCpConstants.UNKNOWN_ERROR;
     }
     if (job.isSuccessful()) {
       return DistCpConstants.SUCCESS;
     } else {
       return DistCpConstants.UNKNOWN_ERROR;
     }
   }
 //...
 {code}
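
The before/after snippets above can be distilled into a runnable comparison. In the sketch below, `Job` is a stub standing in for the Hadoop job handle, and the constant values are illustrative assumptions rather than the real DistCpConstants; only the return-code logic is the point.

```java
// Self-contained sketch of the proposed change: propagate the job outcome
// as the tool's return code so a calling script can detect failure.
public class ExitCodeSketch {
    // Illustrative values, not necessarily those in DistCpConstants.
    static final int SUCCESS = 0;
    static final int UNKNOWN_ERROR = -999;

    // Stub, not org.apache.hadoop.mapreduce.Job.
    static class Job {
        private final boolean successful;
        Job(boolean successful) { this.successful = successful; }
        boolean isSuccessful() { return successful; }
    }

    // "Before": the job outcome is ignored, so a failed copy still returns 0.
    static int runBefore(Job job) {
        return SUCCESS;
    }

    // "After": a failed job maps to a nonzero code the caller can check.
    static int runAfter(Job job) {
        return job.isSuccessful() ? SUCCESS : UNKNOWN_ERROR;
    }

    public static void main(String[] args) {
        Job failed = new Job(false);
        System.out.println("before: " + runBefore(failed)); // always 0
        System.out.println("after:  " + runAfter(failed));  // nonzero on failure
    }
}
```

This is exactly the failure mode the reporter hit: with the "before" shape, the shell sees exit status 0 and the calling script continues even though the underlying MapReduce job failed.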





[jira] [Updated] (MAPREDUCE-5549) distcp app should fail if m/r job fails

2015-03-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5549:

Target Version/s: 2.6.0, 3.0.0  (was: 3.0.0, 2.6.0)
  Status: Patch Available  (was: Open)

 distcp app should fail if m/r job fails
 ---

 Key: MAPREDUCE-5549
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5549
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: distcp, mrv2
Affects Versions: 3.0.0
Reporter: David Rosenstrauch
 Attachments: MAPREDUCE-5549-001.patch, MAPREDUCE-5549-002.patch


 I run distcpv2 in a scripted manner.  The script checks if the distcp step 
 fails and, if so, aborts the rest of the script.  However, I ran into an 
 issue today where the distcp job failed, but my calling script went on its 
 merry way.
 Digging into the code a bit more (at 
 https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java),
  I think I see the issue:  the distcp app is not returning an error exit code 
 to the shell when the distcp job fails.  This is a big problem, IMO, as it 
 prevents distcp from being successfully used in a scripted environment.  IMO, 
 the code should change like so:
 Before:
 {code:title=org.apache.hadoop.tools.DistCp.java}
 //...
   public int run(String[] argv) {
 //...
     try {
       execute();
     } catch (InvalidInputException e) {
       LOG.error("Invalid input: ", e);
       return DistCpConstants.INVALID_ARGUMENT;
     } catch (DuplicateFileException e) {
       LOG.error("Duplicate files in input path: ", e);
       return DistCpConstants.DUPLICATE_INPUT;
     } catch (Exception e) {
       LOG.error("Exception encountered ", e);
       return DistCpConstants.UNKNOWN_ERROR;
     }
     return DistCpConstants.SUCCESS;
   }
 //...
 {code}
 After:
 {code:title=org.apache.hadoop.tools.DistCp.java}
 //...
   public int run(String[] argv) {
 //...
     Job job = null;
     try {
       job = execute();
     } catch (InvalidInputException e) {
       LOG.error("Invalid input: ", e);
       return DistCpConstants.INVALID_ARGUMENT;
     } catch (DuplicateFileException e) {
       LOG.error("Duplicate files in input path: ", e);
       return DistCpConstants.DUPLICATE_INPUT;
     } catch (Exception e) {
       LOG.error("Exception encountered ", e);
       return DistCpConstants.UNKNOWN_ERROR;
     }
     if (job.isSuccessful()) {
       return DistCpConstants.SUCCESS;
     } else {
       return DistCpConstants.UNKNOWN_ERROR;
     }
   }
 //...
 {code}





[jira] [Updated] (MAPREDUCE-5549) distcp app should fail if m/r job fails

2015-03-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5549:

Target Version/s: 2.6.0, 3.0.0  (was: 3.0.0, 2.6.0)
  Status: Open  (was: Patch Available)

 distcp app should fail if m/r job fails
 ---

 Key: MAPREDUCE-5549
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5549
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: distcp, mrv2
Affects Versions: 3.0.0
Reporter: David Rosenstrauch
 Attachments: MAPREDUCE-5549-001.patch, MAPREDUCE-5549-002.patch


 I run distcpv2 in a scripted manner.  The script checks if the distcp step 
 fails and, if so, aborts the rest of the script.  However, I ran into an 
 issue today where the distcp job failed, but my calling script went on its 
 merry way.
 Digging into the code a bit more (at 
 https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java),
  I think I see the issue:  the distcp app is not returning an error exit code 
 to the shell when the distcp job fails.  This is a big problem, IMO, as it 
 prevents distcp from being successfully used in a scripted environment.  IMO, 
 the code should change like so:
 Before:
 {code:title=org.apache.hadoop.tools.DistCp.java}
 //...
   public int run(String[] argv) {
 //...
     try {
       execute();
     } catch (InvalidInputException e) {
       LOG.error("Invalid input: ", e);
       return DistCpConstants.INVALID_ARGUMENT;
     } catch (DuplicateFileException e) {
       LOG.error("Duplicate files in input path: ", e);
       return DistCpConstants.DUPLICATE_INPUT;
     } catch (Exception e) {
       LOG.error("Exception encountered ", e);
       return DistCpConstants.UNKNOWN_ERROR;
     }
     return DistCpConstants.SUCCESS;
   }
 //...
 {code}
 After:
 {code:title=org.apache.hadoop.tools.DistCp.java}
 //...
   public int run(String[] argv) {
 //...
     Job job = null;
     try {
       job = execute();
     } catch (InvalidInputException e) {
       LOG.error("Invalid input: ", e);
       return DistCpConstants.INVALID_ARGUMENT;
     } catch (DuplicateFileException e) {
       LOG.error("Duplicate files in input path: ", e);
       return DistCpConstants.DUPLICATE_INPUT;
     } catch (Exception e) {
       LOG.error("Exception encountered ", e);
       return DistCpConstants.UNKNOWN_ERROR;
     }
     if (job.isSuccessful()) {
       return DistCpConstants.SUCCESS;
     } else {
       return DistCpConstants.UNKNOWN_ERROR;
     }
   }
 //...
 {code}





[jira] [Commented] (MAPREDUCE-4742) Fix typo in nnbench#displayUsage

2015-03-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356760#comment-14356760
 ] 

Hudson commented on MAPREDUCE-4742:
---

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #129 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/129/])
MAPREDUCE-4742. Fix typo in nnbench#displayUsage. Contributed by Liang Xie. 
(ozawa: rev 20b8ee1350e62d1b21c951e653302b6e6a8e4f7e)
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/hdfs/NNBench.java


 Fix typo in nnbench#displayUsage
 

 Key: MAPREDUCE-4742
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4742
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.2-alpha, 0.23.4, 2.6.0
Reporter: Liang Xie
Priority: Trivial
 Fix For: 2.8.0

 Attachments: MAPREDUCE-4742.txt








[jira] [Commented] (MAPREDUCE-4815) Speed up FileOutputCommitter#commitJob for many output files

2015-03-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356762#comment-14356762
 ] 

Hudson commented on MAPREDUCE-4815:
---

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #129 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/129/])
MAPREDUCE-4815. Speed up FileOutputCommitter#commitJob for many output files. 
(Siqi Li via gera) (gera: rev aa92b764a7ddb888d097121c4d610089a0053d11)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestFileOutputCommitter.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestFileOutputCommitter.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
* hadoop-mapreduce-project/CHANGES.txt


 Speed up FileOutputCommitter#commitJob for many output files
 

 Key: MAPREDUCE-4815
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4815
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.3, 2.0.1-alpha, 2.4.1
Reporter: Jason Lowe
Assignee: Siqi Li
  Labels: perfomance
 Fix For: 2.7.0

 Attachments: MAPREDUCE-4815.v10.patch, MAPREDUCE-4815.v11.patch, 
 MAPREDUCE-4815.v12.patch, MAPREDUCE-4815.v13.patch, MAPREDUCE-4815.v14.patch, 
 MAPREDUCE-4815.v15.patch, MAPREDUCE-4815.v16.patch, MAPREDUCE-4815.v17.patch, 
 MAPREDUCE-4815.v3.patch, MAPREDUCE-4815.v4.patch, MAPREDUCE-4815.v5.patch, 
 MAPREDUCE-4815.v6.patch, MAPREDUCE-4815.v7.patch, MAPREDUCE-4815.v8.patch, 
 MAPREDUCE-4815.v9.patch


 If a job generates many files to commit then the commitJob method call at the 
 end of the job can take minutes.  This is a performance regression from 1.x, 
 as 1.x had the tasks commit directly to the final output directory as they 
 were completing and commitJob had very little to do.  The commit work was 
 processed in parallel and overlapped the processing of outstanding tasks.  In 
 0.23/2.x, the commit is single-threaded and waits until all tasks have 
 completed before commencing.





[jira] [Commented] (MAPREDUCE-4742) Fix typo in nnbench#displayUsage

2015-03-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356768#comment-14356768
 ] 

Hudson commented on MAPREDUCE-4742:
---

FAILURE: Integrated in Hadoop-Yarn-trunk #863 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/863/])
MAPREDUCE-4742. Fix typo in nnbench#displayUsage. Contributed by Liang Xie. 
(ozawa: rev 20b8ee1350e62d1b21c951e653302b6e6a8e4f7e)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/hdfs/NNBench.java
* hadoop-mapreduce-project/CHANGES.txt


 Fix typo in nnbench#displayUsage
 

 Key: MAPREDUCE-4742
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4742
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.2-alpha, 0.23.4, 2.6.0
Reporter: Liang Xie
Priority: Trivial
 Fix For: 2.8.0

 Attachments: MAPREDUCE-4742.txt








[jira] [Commented] (MAPREDUCE-4815) Speed up FileOutputCommitter#commitJob for many output files

2015-03-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356770#comment-14356770
 ] 

Hudson commented on MAPREDUCE-4815:
---

FAILURE: Integrated in Hadoop-Yarn-trunk #863 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/863/])
MAPREDUCE-4815. Speed up FileOutputCommitter#commitJob for many output files. 
(Siqi Li via gera) (gera: rev aa92b764a7ddb888d097121c4d610089a0053d11)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestFileOutputCommitter.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestFileOutputCommitter.java
* hadoop-mapreduce-project/CHANGES.txt


 Speed up FileOutputCommitter#commitJob for many output files
 

 Key: MAPREDUCE-4815
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4815
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.3, 2.0.1-alpha, 2.4.1
Reporter: Jason Lowe
Assignee: Siqi Li
  Labels: performance
 Fix For: 2.7.0

 Attachments: MAPREDUCE-4815.v10.patch, MAPREDUCE-4815.v11.patch, 
 MAPREDUCE-4815.v12.patch, MAPREDUCE-4815.v13.patch, MAPREDUCE-4815.v14.patch, 
 MAPREDUCE-4815.v15.patch, MAPREDUCE-4815.v16.patch, MAPREDUCE-4815.v17.patch, 
 MAPREDUCE-4815.v3.patch, MAPREDUCE-4815.v4.patch, MAPREDUCE-4815.v5.patch, 
 MAPREDUCE-4815.v6.patch, MAPREDUCE-4815.v7.patch, MAPREDUCE-4815.v8.patch, 
 MAPREDUCE-4815.v9.patch


 If a job generates many files to commit then the commitJob method call at the 
 end of the job can take minutes.  This is a performance regression from 1.x, 
 as 1.x had the tasks commit directly to the final output directory as they 
 were completing and commitJob had very little to do.  The commit work was 
 processed in parallel and overlapped the processing of outstanding tasks.  In 
 0.23/2.x, the commit is single-threaded and waits until all tasks have 
 completed before commencing.
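The 1.x behavior described above can be sketched with plain java.nio: each task's pending output is renamed into the final directory by a small thread pool, so the rename work overlaps instead of running single-threaded at job end. This is an illustrative sketch only; the class and method names (ParallelCommitSketch, commitAll) are hypothetical and not the actual FileOutputCommitter API.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.concurrent.*;

// Hypothetical sketch: parallelize the per-task rename work that a
// single-threaded commitJob would otherwise do sequentially.
public class ParallelCommitSketch {

    // Move every task-attempt directory's files into the final output
    // directory, using a thread pool so the renames overlap.
    static void commitAll(Path pendingDir, Path outputDir, int threads)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try (DirectoryStream<Path> tasks = Files.newDirectoryStream(pendingDir)) {
            List<Callable<Void>> jobs = new ArrayList<>();
            for (Path taskDir : tasks) {
                jobs.add(() -> {
                    try (DirectoryStream<Path> files = Files.newDirectoryStream(taskDir)) {
                        for (Path f : files) {
                            // A rename is cheap on the same filesystem.
                            Files.move(f, outputDir.resolve(f.getFileName()),
                                    StandardCopyOption.ATOMIC_MOVE);
                        }
                    }
                    return null;
                });
            }
            // Block until all renames finish; surface any failure.
            for (Future<Void> f : pool.invokeAll(jobs)) {
                f.get();
            }
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        Path base = Files.createTempDirectory("commit-demo");
        Path pending = Files.createDirectories(base.resolve("pending"));
        Path out = Files.createDirectories(base.resolve("out"));
        for (int t = 0; t < 4; t++) {
            Path taskDir = Files.createDirectories(pending.resolve("task_" + t));
            Files.write(taskDir.resolve("part-" + t), ("data" + t).getBytes());
        }
        commitAll(pending, out, 4);
        System.out.println(Files.list(out).count()); // 4
    }
}
```

The actual MAPREDUCE-4815 change (see mapred-default.xml in the commit) is a FileOutputCommitter algorithm option rather than this exact thread-pool shape; the sketch only illustrates why overlapping the commit work removes the minutes-long serial tail.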





[jira] [Commented] (MAPREDUCE-210) want InputFormat for zip files

2015-03-11 Thread Hari Sekhon (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14356797#comment-14356797
 ] 

Hari Sekhon commented on MAPREDUCE-210:
---

There is a 3rd-party zip InputFormat here:

http://cotdp.com/2012/07/hadoop-processing-zip-files-in-mapreduce/

I think it's important for a zip InputFormat to be natively supported, because 
the traditional enterprises Hadoop is now starting to penetrate use zip a lot, 
especially large, Windows-heavy corporates that don't realize the problems they 
cause by keeping so many things in zip files that Hadoop currently can't read.

 want InputFormat for zip files
 --

 Key: MAPREDUCE-210
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-210
 Project: Hadoop Map/Reduce
  Issue Type: New Feature
Reporter: Doug Cutting
Assignee: indrajit
 Attachments: ZipInputFormat_fixed.patch


 HDFS is inefficient with large numbers of small files.  Thus one might pack 
 many small files into large, compressed, archives.  But, for efficient 
 map-reduce operation, it is desirable to be able to split inputs into 
 smaller chunks, with one or more small original file per split.  The zip 
 format, unlike tar, permits enumeration of files in the archive without 
 scanning the entire archive.  Thus a zip InputFormat could efficiently permit 
 splitting large archives into splits that contain one or more archived files.
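The property the description relies on, that zip entries can be enumerated from the central directory without scanning the whole archive, is visible even in the standard library's java.util.zip.ZipFile. A minimal sketch (class and method names are illustrative, not part of any Hadoop InputFormat):

```java
import java.io.*;
import java.util.zip.*;

// Sketch: ZipFile reads the zip central directory, so entries can be
// listed without streaming through the whole archive -- the property
// that would let a zip InputFormat map entries to input splits.
public class ZipEntryListing {

    static java.util.List<String> listEntries(File zip) throws IOException {
        java.util.List<String> names = new java.util.ArrayList<>();
        try (ZipFile zf = new ZipFile(zip)) {
            java.util.Enumeration<? extends ZipEntry> e = zf.entries();
            while (e.hasMoreElements()) {
                // Name and size come from the central directory; an
                // InputFormat could turn each entry into one split.
                names.add(e.nextElement().getName());
            }
        }
        return names;
    }

    public static void main(String[] args) throws IOException {
        File zip = File.createTempFile("demo", ".zip");
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(zip))) {
            for (String name : new String[]{"a.txt", "b.txt"}) {
                zos.putNextEntry(new ZipEntry(name));
                zos.write(("contents of " + name).getBytes());
                zos.closeEntry();
            }
        }
        System.out.println(listEntries(zip)); // [a.txt, b.txt]
    }
}
```

By contrast, a tar archive has no central directory, so finding a member requires walking the stream; that is the asymmetry the description points to.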





[jira] [Commented] (MAPREDUCE-4815) Speed up FileOutputCommitter#commitJob for many output files

2015-03-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14356947#comment-14356947
 ] 

Hudson commented on MAPREDUCE-4815:
---

FAILURE: Integrated in Hadoop-Hdfs-trunk #2061 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2061/])
MAPREDUCE-4815. Speed up FileOutputCommitter#commitJob for many output files. 
(Siqi Li via gera) (gera: rev aa92b764a7ddb888d097121c4d610089a0053d11)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestFileOutputCommitter.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputCommitter.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestFileOutputCommitter.java
* hadoop-mapreduce-project/CHANGES.txt


 Speed up FileOutputCommitter#commitJob for many output files
 

 Key: MAPREDUCE-4815
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4815
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mrv2
Affects Versions: 0.23.3, 2.0.1-alpha, 2.4.1
Reporter: Jason Lowe
Assignee: Siqi Li
  Labels: performance
 Fix For: 2.7.0

 Attachments: MAPREDUCE-4815.v10.patch, MAPREDUCE-4815.v11.patch, 
 MAPREDUCE-4815.v12.patch, MAPREDUCE-4815.v13.patch, MAPREDUCE-4815.v14.patch, 
 MAPREDUCE-4815.v15.patch, MAPREDUCE-4815.v16.patch, MAPREDUCE-4815.v17.patch, 
 MAPREDUCE-4815.v3.patch, MAPREDUCE-4815.v4.patch, MAPREDUCE-4815.v5.patch, 
 MAPREDUCE-4815.v6.patch, MAPREDUCE-4815.v7.patch, MAPREDUCE-4815.v8.patch, 
 MAPREDUCE-4815.v9.patch


 If a job generates many files to commit then the commitJob method call at the 
 end of the job can take minutes.  This is a performance regression from 1.x, 
 as 1.x had the tasks commit directly to the final output directory as they 
 were completing and commitJob had very little to do.  The commit work was 
 processed in parallel and overlapped the processing of outstanding tasks.  In 
 0.23/2.x, the commit is single-threaded and waits until all tasks have 
 completed before commencing.





[jira] [Commented] (MAPREDUCE-4742) Fix typo in nnbench#displayUsage

2015-03-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14356945#comment-14356945
 ] 

Hudson commented on MAPREDUCE-4742:
---

FAILURE: Integrated in Hadoop-Hdfs-trunk #2061 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2061/])
MAPREDUCE-4742. Fix typo in nnbench#displayUsage. Contributed by Liang Xie. 
(ozawa: rev 20b8ee1350e62d1b21c951e653302b6e6a8e4f7e)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/hdfs/NNBench.java
* hadoop-mapreduce-project/CHANGES.txt


 Fix typo in nnbench#displayUsage
 

 Key: MAPREDUCE-4742
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4742
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.2-alpha, 0.23.4, 2.6.0
Reporter: Liang Xie
Priority: Trivial
 Fix For: 2.8.0

 Attachments: MAPREDUCE-4742.txt








[jira] [Updated] (MAPREDUCE-4683) We need to fix our build to create/distribute hadoop-mapreduce-client-core-tests.jar

2015-03-11 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated MAPREDUCE-4683:
-
Description: We need to fix our build to create/distribute 
hadoop-mapreduce-client-core-tests.jar, need this before MAPREDUCE-4253  (was: 
We need to fix our build to create/distribute 
hadoop-mapreduce-client-core-tests.jar, need this after MAPREDUCE-4253)

 We need to fix our build to create/distribute 
 hadoop-mapreduce-client-core-tests.jar
 

 Key: MAPREDUCE-4683
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4683
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: build
Reporter: Arun C Murthy
Assignee: Akira AJISAKA
Priority: Critical
 Attachments: MAPREDUCE-4683.patch


 We need to fix our build to create/distribute 
 hadoop-mapreduce-client-core-tests.jar, need this before MAPREDUCE-4253





[jira] [Updated] (MAPREDUCE-6273) HistoryFileManager should check whether summaryFile exists to avoid FileNotFoundException causing HistoryFileInfo into MOVE_FAILED state

2015-03-11 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated MAPREDUCE-6273:
-
Attachment: MAPREDUCE-6273.000.patch

 HistoryFileManager should check whether summaryFile exists to avoid 
 FileNotFoundException causing HistoryFileInfo into MOVE_FAILED state
 

 Key: MAPREDUCE-6273
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6273
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Minor
 Attachments: MAPREDUCE-6273.000.patch


 HistoryFileManager should check whether summaryFile exists to avoid 
 FileNotFoundException causing HistoryFileInfo into MOVE_FAILED state,
 I saw the following error message:
 {code}
 2015-02-17 19:13:45,198 ERROR 
 org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager: Error while trying to 
 move a job to done
 java.io.FileNotFoundException: File does not exist: 
 /user/history/done_intermediate/agd_laci-sluice/job_1423740288390_1884.summary
   at 
 org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:65)
   at 
 org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:55)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1878)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1819)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1799)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1771)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:527)
   at 
 org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:85)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:356)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
   at sun.reflect.GeneratedConstructorAccessor29.newInstance(Unknown 
 Source)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
   at 
 org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
   at 
 org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1181)
   at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1169)
   at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1159)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:270)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:237)
   at org.apache.hadoop.hdfs.DFSInputStream.init(DFSInputStream.java:230)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1457)
   at org.apache.hadoop.fs.Hdfs.open(Hdfs.java:318)
   at org.apache.hadoop.fs.Hdfs.open(Hdfs.java:59)
   at 
 org.apache.hadoop.fs.AbstractFileSystem.open(AbstractFileSystem.java:621)
   at org.apache.hadoop.fs.FileContext$6.next(FileContext.java:789)
   at org.apache.hadoop.fs.FileContext$6.next(FileContext.java:785)
   at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
   at org.apache.hadoop.fs.FileContext.open(FileContext.java:785)
   at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.getJobSummary(HistoryFileManager.java:953)
   at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.access$400(HistoryFileManager.java:82)
   at 
 

[jira] [Updated] (MAPREDUCE-6273) HistoryFileManager should check whether summaryFile exists to avoid FileNotFoundException causing HistoryFileInfo into MOVE_FAILED state

2015-03-11 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated MAPREDUCE-6273:
-
Status: Patch Available  (was: Open)

 HistoryFileManager should check whether summaryFile exists to avoid 
 FileNotFoundException causing HistoryFileInfo into MOVE_FAILED state
 

 Key: MAPREDUCE-6273
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6273
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Minor
 Attachments: MAPREDUCE-6273.000.patch


 HistoryFileManager should check whether summaryFile exists to avoid 
 FileNotFoundException causing HistoryFileInfo into MOVE_FAILED state,
 I saw the following error message:
 {code}
 2015-02-17 19:13:45,198 ERROR 
 org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager: Error while trying to 
 move a job to done
 java.io.FileNotFoundException: File does not exist: 
 /user/history/done_intermediate/agd_laci-sluice/job_1423740288390_1884.summary
   at 
 org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:65)
   at 
 org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:55)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1878)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1819)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1799)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1771)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:527)
   at 
 org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:85)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:356)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
   at sun.reflect.GeneratedConstructorAccessor29.newInstance(Unknown 
 Source)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
   at 
 org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
   at 
 org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
   at 
 org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1181)
   at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1169)
   at 
 org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1159)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:270)
   at 
 org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:237)
   at org.apache.hadoop.hdfs.DFSInputStream.init(DFSInputStream.java:230)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1457)
   at org.apache.hadoop.fs.Hdfs.open(Hdfs.java:318)
   at org.apache.hadoop.fs.Hdfs.open(Hdfs.java:59)
   at 
 org.apache.hadoop.fs.AbstractFileSystem.open(AbstractFileSystem.java:621)
   at org.apache.hadoop.fs.FileContext$6.next(FileContext.java:789)
   at org.apache.hadoop.fs.FileContext$6.next(FileContext.java:785)
   at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
   at org.apache.hadoop.fs.FileContext.open(FileContext.java:785)
   at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.getJobSummary(HistoryFileManager.java:953)
   at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.access$400(HistoryFileManager.java:82)
   at 
 

[jira] [Created] (MAPREDUCE-6273) HistoryFileManager should check whether summaryFile exists to avoid FileNotFoundException causing HistoryFileInfo into MOVE_FAILED state

2015-03-11 Thread zhihai xu (JIRA)
zhihai xu created MAPREDUCE-6273:


 Summary: HistoryFileManager should check whether summaryFile 
exists to avoid FileNotFoundException causing HistoryFileInfo into MOVE_FAILED 
state
 Key: MAPREDUCE-6273
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6273
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Minor


HistoryFileManager should check whether summaryFile exists to avoid 
FileNotFoundException causing HistoryFileInfo into MOVE_FAILED state,
I saw the following error message:
{code}
2015-02-17 19:13:45,198 ERROR 
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager: Error while trying to 
move a job to done
java.io.FileNotFoundException: File does not exist: 
/user/history/done_intermediate/agd_laci-sluice/job_1423740288390_1884.summary
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:65)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:55)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1878)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1819)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1799)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1771)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:527)
at 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:85)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:356)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

at sun.reflect.GeneratedConstructorAccessor29.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at 
org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1181)
at 
org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1169)
at 
org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1159)
at 
org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:270)
at 
org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:237)
at org.apache.hadoop.hdfs.DFSInputStream.init(DFSInputStream.java:230)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1457)
at org.apache.hadoop.fs.Hdfs.open(Hdfs.java:318)
at org.apache.hadoop.fs.Hdfs.open(Hdfs.java:59)
at 
org.apache.hadoop.fs.AbstractFileSystem.open(AbstractFileSystem.java:621)
at org.apache.hadoop.fs.FileContext$6.next(FileContext.java:789)
at org.apache.hadoop.fs.FileContext$6.next(FileContext.java:785)
at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
at org.apache.hadoop.fs.FileContext.open(FileContext.java:785)
at 
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.getJobSummary(HistoryFileManager.java:953)
at 
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.access$400(HistoryFileManager.java:82)
at 
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.moveToDone(HistoryFileManager.java:370)
at 
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.access$1400(HistoryFileManager.java:295)
at 
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$1.run(HistoryFileManager.java:843)
at 

[jira] [Updated] (MAPREDUCE-6273) HistoryFileManager should check whether summaryFile exists to avoid FileNotFoundException causing HistoryFileInfo into MOVE_FAILED state

2015-03-11 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated MAPREDUCE-6273:
-
Description: 
HistoryFileManager should check whether summaryFile exists to avoid 
FileNotFoundException causing HistoryFileInfo into MOVE_FAILED state,
I saw the following error message:
{code}
2015-02-17 19:13:45,198 ERROR 
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager: Error while trying to 
move a job to done
java.io.FileNotFoundException: File does not exist: 
/user/history/done_intermediate/agd_laci-sluice/job_1423740288390_1884.summary
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:65)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:55)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1878)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1819)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1799)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1771)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:527)
at 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:85)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:356)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

at sun.reflect.GeneratedConstructorAccessor29.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at 
org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1181)
at 
org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1169)
at 
org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1159)
at 
org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:270)
at 
org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:237)
at org.apache.hadoop.hdfs.DFSInputStream.init(DFSInputStream.java:230)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1457)
at org.apache.hadoop.fs.Hdfs.open(Hdfs.java:318)
at org.apache.hadoop.fs.Hdfs.open(Hdfs.java:59)
at 
org.apache.hadoop.fs.AbstractFileSystem.open(AbstractFileSystem.java:621)
at org.apache.hadoop.fs.FileContext$6.next(FileContext.java:789)
at org.apache.hadoop.fs.FileContext$6.next(FileContext.java:785)
at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
at org.apache.hadoop.fs.FileContext.open(FileContext.java:785)
at 
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.getJobSummary(HistoryFileManager.java:953)
at 
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.access$400(HistoryFileManager.java:82)
at 
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.moveToDone(HistoryFileManager.java:370)
at 
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.access$1400(HistoryFileManager.java:295)
at 
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$1.run(HistoryFileManager.java:843)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: 
org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File does 
not exist: 

[jira] [Commented] (MAPREDUCE-6273) HistoryFileManager should check whether summaryFile exists to avoid FileNotFoundException causing HistoryFileInfo into MOVE_FAILED state

2015-03-11 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358108#comment-14358108
 ] 

zhihai xu commented on MAPREDUCE-6273:
--

I uploaded a patch MAPREDUCE-6273.000.patch, which is a very simple and small 
change.
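The idea behind the fix can be sketched in a few lines with java.nio: test for the summary file before opening it, so a file that another mover already processed and deleted becomes a no-op instead of a thrown FileNotFoundException that drives the HistoryFileInfo into MOVE_FAILED. This is a hypothetical illustration (the class and method names below are not the actual HistoryFileManager code, which reads via FileContext on HDFS):

```java
import java.io.IOException;
import java.nio.file.*;

// Hypothetical sketch of the fix's idea: check existence before open,
// so a missing summary file does not abort the move-to-done step.
public class SummaryFileCheck {

    static String readSummaryIfPresent(Path summaryFile) throws IOException {
        if (!Files.exists(summaryFile)) {
            // File may have been processed and removed already; skip it.
            return null;
        }
        return new String(Files.readAllBytes(summaryFile));
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("hs-demo");
        Path summary = dir.resolve("job_0001.summary");
        System.out.println(readSummaryIfPresent(summary)); // null, no exception
        Files.write(summary, "jobId=job_0001".getBytes());
        System.out.println(readSummaryIfPresent(summary)); // jobId=job_0001
    }
}
```

Note that on a real distributed filesystem an exists-then-read pair still races with a concurrent delete, so the open itself should also tolerate FileNotFoundException; the existence check merely removes the common case.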

 HistoryFileManager should check whether summaryFile exists to avoid 
 FileNotFoundException causing HistoryFileInfo into MOVE_FAILED state
 

 Key: MAPREDUCE-6273
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6273
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Minor
 Attachments: MAPREDUCE-6273.000.patch


 HistoryFileManager should check whether summaryFile exists to avoid 
 FileNotFoundException causing HistoryFileInfo into MOVE_FAILED state,
 I saw the following error message:
 {code}
 2015-02-17 19:13:45,198 ERROR 
 org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager: Error while trying to 
 move a job to done
 java.io.FileNotFoundException: File does not exist: 
 /user/history/done_intermediate/agd_laci-sluice/job_1423740288390_1884.summary
   at 
 org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:65)
   at 
 org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:55)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1878)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1819)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1799)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1771)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:527)
   at 
 org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:85)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:356)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
   at sun.reflect.GeneratedConstructorAccessor29.newInstance(Unknown Source)
   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
   at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
   at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
   at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1181)
   at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1169)
   at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1159)
   at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:270)
   at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:237)
   at org.apache.hadoop.hdfs.DFSInputStream.init(DFSInputStream.java:230)
   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1457)
   at org.apache.hadoop.fs.Hdfs.open(Hdfs.java:318)
   at org.apache.hadoop.fs.Hdfs.open(Hdfs.java:59)
   at org.apache.hadoop.fs.AbstractFileSystem.open(AbstractFileSystem.java:621)
   at org.apache.hadoop.fs.FileContext$6.next(FileContext.java:789)
   at org.apache.hadoop.fs.FileContext$6.next(FileContext.java:785)
   at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
   at org.apache.hadoop.fs.FileContext.open(FileContext.java:785)
   at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.getJobSummary(HistoryFileManager.java:953)
   at 

[jira] [Commented] (MAPREDUCE-6273) HistoryFileManager should check whether summaryFile exists to avoid FileNotFoundException causing HistoryFileInfo into MOVE_FAILED state

2015-03-11 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/MAPREDUCE-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358134#comment-14358134 ]

Hadoop QA commented on MAPREDUCE-6273:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12704088/MAPREDUCE-6273.000.patch
  against trunk revision 85f6d67.

{color:green}+1 @author{color}.  The patch does not contain any @author tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs.

Test results: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5283//testReport/
Console output: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5283//console

This message is automatically generated.
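As background for the patch under review: the issue title proposes guarding the read of the job's .summary file with an existence check so a missing file does not surface as a FileNotFoundException and push the HistoryFileInfo into MOVE_FAILED. The sketch below illustrates that check-before-read pattern in isolation; it uses java.nio.file as a stand-in and is not the actual HistoryFileManager code, which operates on Hadoop's FileContext.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical illustration of the guard MAPREDUCE-6273 describes:
// verify the summary file exists before opening it, and treat a
// missing file as "no summary" instead of letting the read throw.
public class SummaryGuardSketch {
    static String readSummaryIfPresent(Path summaryFile) throws IOException {
        if (summaryFile == null || !Files.exists(summaryFile)) {
            // Without this check, opening a deleted summary file throws
            // FileNotFoundException and the move-to-done step fails.
            return null;
        }
        return new String(Files.readAllBytes(summaryFile));
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("job_0001", ".summary");
        Files.write(tmp, "jobId=job_0001,status=SUCCEEDED".getBytes());
        System.out.println(readSummaryIfPresent(tmp)); // summary is read normally
        Files.delete(tmp);
        System.out.println(readSummaryIfPresent(tmp)); // missing file: null, no exception
    }
}
```

The real fix would apply the same idea at HistoryFileManager.getJobSummary's call site, where the stack trace in the issue description shows the unguarded open.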

 HistoryFileManager should check whether summaryFile exists to avoid 
 FileNotFoundException causing HistoryFileInfo into MOVE_FAILED state
 

 Key: MAPREDUCE-6273
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6273
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver
Reporter: zhihai xu
Assignee: zhihai xu
Priority: Minor
 Attachments: MAPREDUCE-6273.000.patch


 HistoryFileManager should check whether the summary file exists before opening it, so that a 
 FileNotFoundException does not push the HistoryFileInfo into the MOVE_FAILED state.
 I saw the following error message:
 {code}
 2015-02-17 19:13:45,198 ERROR org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager: Error while trying to move a job to done
 java.io.FileNotFoundException: File does not exist: /user/history/done_intermediate/agd_laci-sluice/job_1423740288390_1884.summary
   at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:65)
   at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:55)
   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1878)
   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1819)
   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1799)
   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1771)
   at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:527)
   at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:85)
   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:356)
   at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
   at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
   at sun.reflect.GeneratedConstructorAccessor29.newInstance(Unknown Source)
   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
   at