[jira] [Updated] (MAPREDUCE-5294) Shuffle#MergeManager should support org.apache.hadoop.mapreduce.Reducer

2013-07-03 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated MAPREDUCE-5294:
--

Hadoop Flags: Reviewed

 Shuffle#MergeManager should support org.apache.hadoop.mapreduce.Reducer
 ---

 Key: MAPREDUCE-5294
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5294
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: mrv2
Affects Versions: trunk, 2.1.0-beta, 2.0.5-alpha
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Attachments: MAPREDUCE-5294.1.patch, MAPREDUCE-5294.2.patch, 
 MAPREDUCE-5294.3.patch


 Shuffle#MergeManager currently accepts only org.apache.hadoop.mapred.Reducer. 
 Because of this, the reduce-side combiner is not used when the new API is 
 used; it is simply ignored. By supporting the new API and using that support 
 from ReduceTask, the reduce-side combiner can be enabled with the new API as 
 well. Please see MAPREDUCE-5221 for more detail.
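 To make the gap concrete, here is a minimal driver fragment (the mapper and 
 reducer class names are hypothetical) of the situation described above: a 
 combiner registered through the new org.apache.hadoop.mapreduce API runs 
 during map-side spills but is silently skipped by the reduce-side merge today.
 {code:java}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.mapreduce.Job;

 Configuration conf = new Configuration();
 Job job = Job.getInstance(conf, "wordcount");
 job.setMapperClass(WordCountMapper.class);     // hypothetical new-API mapper
 // New-API combiner: applied on the map side, but ignored by the reduce-side
 // MergeManager until it understands org.apache.hadoop.mapreduce.Reducer.
 job.setCombinerClass(WordCountReducer.class);  // hypothetical new-API reducer
 job.setReducerClass(WordCountReducer.class);
 {code}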

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5358) MRAppMaster throws invalid transitions for JobImpl

2013-07-03 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698697#comment-13698697
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-5358:
---

+1 for merging

 MRAppMaster throws invalid transitions for JobImpl
 --

 Key: MAPREDUCE-5358
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5358
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mr-am
Affects Versions: 2.0.1-alpha, 2.0.5-alpha
Reporter: Devaraj K
Assignee: Devaraj K
 Attachments: MAPREDUCE-5358.patch


 {code:xml}
 2013-06-26 11:39:50,128 ERROR [AsyncDispatcher event handler] 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Can't handle this event 
 at current state
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 JOB_TASK_ATTEMPT_COMPLETED at SUCCEEDED
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
   at 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:720)
   at 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:119)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:962)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:958)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:128)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
   at java.lang.Thread.run(Thread.java:662)
 {code}
 {code:xml}
 2013-06-26 11:39:50,129 ERROR [AsyncDispatcher event handler] 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Can't handle this event 
 at current state
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 JOB_MAP_TASK_RESCHEDULED at SUCCEEDED
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
   at 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:720)
   at 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:119)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:962)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:958)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:128)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
   at java.lang.Thread.run(Thread.java:662)
 {code}
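 Both events above arrive after the job has already reached its terminal 
 SUCCEEDED state. Shown purely as an illustrative sketch (not necessarily the 
 committed patch), one way to silence such warnings is to register no-op 
 transitions for these late events; the names below (JobStateInternal, 
 JobEventType, and the stateMachineFactory builder chain) follow JobImpl's 
 conventions but should be treated as assumptions.
 {code:java}
 // Fragment of a StateMachineFactory builder chain (sketch only):
 // stay in SUCCEEDED and do nothing when these events show up late.
 .addTransition(JobStateInternal.SUCCEEDED, JobStateInternal.SUCCEEDED,
     JobEventType.JOB_TASK_ATTEMPT_COMPLETED)
 .addTransition(JobStateInternal.SUCCEEDED, JobStateInternal.SUCCEEDED,
     JobEventType.JOB_MAP_TASK_RESCHEDULED)
 {code}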

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5335) Rename Job Tracker terminology in ShuffleSchedulerImpl

2013-07-03 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698703#comment-13698703
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-5335:
---

+1

 Rename Job Tracker terminology in ShuffleSchedulerImpl
 --

 Key: MAPREDUCE-5335
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5335
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: applicationmaster
Affects Versions: 3.0.0, 2.0.4-alpha
Reporter: Devaraj K
Assignee: Devaraj K
 Attachments: MAPREDUCE-5335.patch


 {code:xml}
 2013-06-17 17:27:30,134 INFO [fetcher#2] 
 org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: Reporting fetch 
 failure for attempt_1371467533091_0005_m_10_0 to jobtracker.
 {code}
 {code:title=ShuffleSchedulerImpl.java|borderStyle=solid}
   // Notify the JobTracker
   // after every read error, if 'reportReadErrorImmediately' is true or
   // after every 'maxFetchFailuresBeforeReporting' failures
   private void checkAndInformJobTracker(
       int failures, TaskAttemptID mapId, boolean readError,
       boolean connectExcpt) {
     if (connectExcpt || (reportReadErrorImmediately && readError)
         || ((failures % maxFetchFailuresBeforeReporting) == 0)) {
       LOG.info("Reporting fetch failure for " + mapId + " to jobtracker.");
       status.addFetchFailedMap((org.apache.hadoop.mapred.TaskAttemptID) mapId);
     }
   }
  {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-5335) Rename Job Tracker terminology in ShuffleSchedulerImpl

2013-07-03 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated MAPREDUCE-5335:
--

Hadoop Flags: Reviewed

 Rename Job Tracker terminology in ShuffleSchedulerImpl
 --

 Key: MAPREDUCE-5335
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5335
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: applicationmaster
Affects Versions: 3.0.0, 2.0.4-alpha
Reporter: Devaraj K
Assignee: Devaraj K
 Attachments: MAPREDUCE-5335.patch


 {code:xml}
 2013-06-17 17:27:30,134 INFO [fetcher#2] 
 org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: Reporting fetch 
 failure for attempt_1371467533091_0005_m_10_0 to jobtracker.
 {code}
 {code:title=ShuffleSchedulerImpl.java|borderStyle=solid}
   // Notify the JobTracker
   // after every read error, if 'reportReadErrorImmediately' is true or
   // after every 'maxFetchFailuresBeforeReporting' failures
   private void checkAndInformJobTracker(
       int failures, TaskAttemptID mapId, boolean readError,
       boolean connectExcpt) {
     if (connectExcpt || (reportReadErrorImmediately && readError)
         || ((failures % maxFetchFailuresBeforeReporting) == 0)) {
       LOG.info("Reporting fetch failure for " + mapId + " to jobtracker.");
       status.addFetchFailedMap((org.apache.hadoop.mapred.TaskAttemptID) mapId);
     }
   }
  {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5359) JobHistory should not use File.separator to match timestamp in path

2013-07-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698820#comment-13698820
 ] 

Hudson commented on MAPREDUCE-5359:
---

Integrated in Hadoop-Yarn-trunk #259 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/259/])
MAPREDUCE-5359. JobHistory should not use File.separator to match timestamp 
in path. Contributed by Chuan Liu. (Revision 1499153)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1499153
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java


 JobHistory should not use File.separator to match timestamp in path
 ---

 Key: MAPREDUCE-5359
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5359
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: MAPREDUCE-5359-trunk.2.patch, MAPREDUCE-5359-trunk.patch


 In the {{HistoryFileManager.getTimestampPartFromPath()}} method, we use the 
 following regular expression to match the timestamp in a Path object. 
 {code:java}
 "\\d{4}" + "\\" + File.separator + "\\d{2}" + "\\" + File.separator + "\\d{2}"
 {code}
 This is incorrect because Path always uses a forward slash, even for Windows 
 paths, while File.separator is platform dependent and is a backslash on Windows.
 This leads to the timestamp failing to match on Windows. One consequence is 
 that {{addDirectoryToSerialNumberIndex()}} also fails. Later, 
 {{getFileInfo()}} will fail if the job info is not in the cache or the 
 intermediate directory.
 The test case {{TestJobHistoryParsing.testScanningOldDirs()}} tests exactly 
 the above scenario and fails on Windows.
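 A minimal sketch of the direction the summary suggests: build the yyyy/mm/dd 
 matcher from the Hadoop Path separator, which is always a forward slash, 
 instead of the platform-dependent File.separator. The class and constant 
 names here are illustrative, not necessarily what the patch uses.
 {code:java}
 import java.util.regex.Pattern;

 public class TimestampDirRegex {
   // org.apache.hadoop.fs.Path always separates components with "/".
   private static final String PATH_SEPARATOR = "/";

   static final Pattern TIMESTAMP_DIR_PATTERN = Pattern.compile(
       "\\d{4}" + "\\" + PATH_SEPARATOR + "\\d{2}" + "\\" + PATH_SEPARATOR + "\\d{2}");

   public static void main(String[] args) {
     // Matches on every platform because Path never uses backslashes.
     System.out.println(TIMESTAMP_DIR_PATTERN.matcher("2013/06/26").matches()); // true
   }
 }
 {code}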

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5357) Job staging directory owner checking could fail on Windows

2013-07-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698815#comment-13698815
 ] 

Hudson commented on MAPREDUCE-5357:
---

Integrated in Hadoop-Yarn-trunk #259 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/259/])
MAPREDUCE-5357. Job staging directory owner checking could fail on Windows. 
(Revision 1499210)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1499210
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java


 Job staging directory owner checking could fail on Windows
 --

 Key: MAPREDUCE-5357
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5357
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: MAPREDUCE-5357-trunk.patch


 In {{JobSubmissionFiles.getStagingDir()}}, we have the following code, which 
 throws an exception if the directory owner is not the current user.
 {code:java}
   String owner = fsStatus.getOwner();
   if (!(owner.equals(currentUser) || owner.equals(realUser))) {
     throw new IOException("The ownership on the staging directory " +
         stagingArea + " is not as expected. " +
         "It is owned by " + owner + ". The directory must " +
         "be owned by the submitter " + currentUser + " or " +
         "by " + realUser);
   }
 {code}
 This check can fail on Windows when the underlying file system is 
 LocalFileSystem, because on Windows the default owner of a file or directory 
 can be the Administrators group when the user belongs to that group.
 Quite a few MR unit tests that run the MR mini cluster with localFs as the 
 underlying file system fail because of this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-3193) FileInputFormat doesn't read files recursively in the input path dir

2013-07-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698816#comment-13698816
 ] 

Hudson commented on MAPREDUCE-3193:
---

Integrated in Hadoop-Yarn-trunk #259 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/259/])
MAPREDUCE-3193. FileInputFormat doesn't read files recursively in the input 
path dir. Contributed by Devaraj K (Revision 1499125)

 Result = FAILURE
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1499125
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ConfigUtil.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/input
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestFileInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestFileInputFormat.java


 FileInputFormat doesn't read files recursively in the input path dir
 

 Key: MAPREDUCE-3193
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3193
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv1, mrv2
Affects Versions: 0.23.2, 2.0.0-alpha, 3.0.0
Reporter: Ramgopal N
Assignee: Devaraj K
 Fix For: 3.0.0, 2.3.0, 0.23.10

 Attachments: MAPREDUCE-3193-1.patch, MAPREDUCE-3193-2.patch, 
 MAPREDUCE-3193-2.patch, MAPREDUCE-3193-3.patch, MAPREDUCE-3193-4.patch, 
 MAPREDUCE-3193-5.patch, MAPREDUCE-3193.patch, MAPREDUCE-3193.security.patch


 A java.io.FileNotFoundException is thrown if an input file is more than one 
 folder level deep, and the job fails.
 Example: the input file is /r1/r2/input.txt
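 As an illustration only, a job-setup fragment that reads such nested input 
 recursively. It assumes the recursive-listing switch wired up by this change 
 is exposed through the "mapreduce.input.fileinputformat.input.dir.recursive" 
 property; the job name and input path are placeholders.
 {code:java}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.mapreduce.Job;
 import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

 Configuration conf = new Configuration();
 // Ask FileInputFormat to descend into sub-directories of each input path.
 conf.setBoolean("mapreduce.input.fileinputformat.input.dir.recursive", true);
 Job job = Job.getInstance(conf, "nested-input");
 FileInputFormat.addInputPath(job, new Path("/r1"));  // also picks up /r1/r2/input.txt
 {code}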

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5355) MiniMRYarnCluster with localFs does not work on Windows

2013-07-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698818#comment-13698818
 ] 

Hudson commented on MAPREDUCE-5355:
---

Integrated in Hadoop-Yarn-trunk #259 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/259/])
MAPREDUCE-5355. MiniMRYarnCluster with localFs does not work on Windows. 
Contributed by Chuan Liu. (Revision 1499148)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1499148
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/MiniMRYarnCluster.java


 MiniMRYarnCluster with localFs does not work on Windows
 ---

 Key: MAPREDUCE-5355
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5355
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: MAPREDUCE-5355-branch-2.patch, 
 MAPREDUCE-5355-trunk.2.patch, MAPREDUCE-5355-trunk.patch


 When MiniMRYarnCluster is configured to run on localFs instead of a remote 
 FS (i.e. MiniDFSCluster), the job will fail on Windows. The error message 
 looks like the following.
 {noformat}
 java.io.IOException: Job status not available
 {noformat}
 In my testing, the following unit tests hit this exception.
 * TestMRJobsWithHistoryService
 * TestClusterMRNotification
 * TestJobCleanup
 * TestJobCounters
 * TestMiniMRClientCluster
 * TestJobOutputCommitter
 * TestMRAppWithCombiner
 * TestMROldApiJobs
 * TestSpeculativeExecution

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-3193) FileInputFormat doesn't read files recursively in the input path dir

2013-07-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698910#comment-13698910
 ] 

Hudson commented on MAPREDUCE-3193:
---

Integrated in Hadoop-Hdfs-0.23-Build #657 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/657/])
svn merge -c 1499125 FIXES: MAPREDUCE-3193. FileInputFormat doesn't read 
files recursively in the input path dir. Contributed by Devaraj K (Revision 
1499131)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1499131
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ConfigUtil.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/input
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestFileInputFormat.java
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestFileInputFormat.java


 FileInputFormat doesn't read files recursively in the input path dir
 

 Key: MAPREDUCE-3193
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3193
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv1, mrv2
Affects Versions: 0.23.2, 2.0.0-alpha, 3.0.0
Reporter: Ramgopal N
Assignee: Devaraj K
 Fix For: 3.0.0, 2.3.0, 0.23.10

 Attachments: MAPREDUCE-3193-1.patch, MAPREDUCE-3193-2.patch, 
 MAPREDUCE-3193-2.patch, MAPREDUCE-3193-3.patch, MAPREDUCE-3193-4.patch, 
 MAPREDUCE-3193-5.patch, MAPREDUCE-3193.patch, MAPREDUCE-3193.security.patch


 A java.io.FileNotFoundException is thrown if an input file is more than one 
 folder level deep, and the job fails.
 Example: the input file is /r1/r2/input.txt

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-3193) FileInputFormat doesn't read files recursively in the input path dir

2013-07-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698918#comment-13698918
 ] 

Hudson commented on MAPREDUCE-3193:
---

Integrated in Hadoop-Hdfs-trunk #1449 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1449/])
MAPREDUCE-3193. FileInputFormat doesn't read files recursively in the input 
path dir. Contributed by Devaraj K (Revision 1499125)

 Result = FAILURE
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1499125
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ConfigUtil.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/input
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestFileInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestFileInputFormat.java


 FileInputFormat doesn't read files recursively in the input path dir
 

 Key: MAPREDUCE-3193
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3193
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv1, mrv2
Affects Versions: 0.23.2, 2.0.0-alpha, 3.0.0
Reporter: Ramgopal N
Assignee: Devaraj K
 Fix For: 3.0.0, 2.3.0, 0.23.10

 Attachments: MAPREDUCE-3193-1.patch, MAPREDUCE-3193-2.patch, 
 MAPREDUCE-3193-2.patch, MAPREDUCE-3193-3.patch, MAPREDUCE-3193-4.patch, 
 MAPREDUCE-3193-5.patch, MAPREDUCE-3193.patch, MAPREDUCE-3193.security.patch


 A java.io.FileNotFoundException is thrown if an input file is more than one 
 folder level deep, and the job fails.
 Example: the input file is /r1/r2/input.txt

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5357) Job staging directory owner checking could fail on Windows

2013-07-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698917#comment-13698917
 ] 

Hudson commented on MAPREDUCE-5357:
---

Integrated in Hadoop-Hdfs-trunk #1449 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1449/])
MAPREDUCE-5357. Job staging directory owner checking could fail on Windows. 
(Revision 1499210)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1499210
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java


 Job staging directory owner checking could fail on Windows
 --

 Key: MAPREDUCE-5357
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5357
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: MAPREDUCE-5357-trunk.patch


 In {{JobSubmissionFiles.getStagingDir()}}, we have the following code, which 
 throws an exception if the directory owner is not the current user.
 {code:java}
   String owner = fsStatus.getOwner();
   if (!(owner.equals(currentUser) || owner.equals(realUser))) {
     throw new IOException("The ownership on the staging directory " +
         stagingArea + " is not as expected. " +
         "It is owned by " + owner + ". The directory must " +
         "be owned by the submitter " + currentUser + " or " +
         "by " + realUser);
   }
 {code}
 This check can fail on Windows when the underlying file system is 
 LocalFileSystem, because on Windows the default owner of a file or directory 
 can be the Administrators group when the user belongs to that group.
 Quite a few MR unit tests that run the MR mini cluster with localFs as the 
 underlying file system fail because of this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5355) MiniMRYarnCluster with localFs does not work on Windows

2013-07-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698920#comment-13698920
 ] 

Hudson commented on MAPREDUCE-5355:
---

Integrated in Hadoop-Hdfs-trunk #1449 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1449/])
MAPREDUCE-5355. MiniMRYarnCluster with localFs does not work on Windows. 
Contributed by Chuan Liu. (Revision 1499148)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1499148
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/MiniMRYarnCluster.java


 MiniMRYarnCluster with localFs does not work on Windows
 ---

 Key: MAPREDUCE-5355
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5355
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: MAPREDUCE-5355-branch-2.patch, 
 MAPREDUCE-5355-trunk.2.patch, MAPREDUCE-5355-trunk.patch


 When MiniMRYarnCluster is configured to run on localFs instead of a remote 
 FS (i.e. MiniDFSCluster), the job will fail on Windows. The error message 
 looks like the following.
 {noformat}
 java.io.IOException: Job status not available
 {noformat}
 In my testing, the following unit tests hit this exception.
 * TestMRJobsWithHistoryService
 * TestClusterMRNotification
 * TestJobCleanup
 * TestJobCounters
 * TestMiniMRClientCluster
 * TestJobOutputCommitter
 * TestMRAppWithCombiner
 * TestMROldApiJobs
 * TestSpeculativeExecution

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5359) JobHistory should not use File.separator to match timestamp in path

2013-07-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698922#comment-13698922
 ] 

Hudson commented on MAPREDUCE-5359:
---

Integrated in Hadoop-Hdfs-trunk #1449 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1449/])
MAPREDUCE-5359. JobHistory should not use File.separator to match timestamp 
in path. Contributed by Chuan Liu. (Revision 1499153)

 Result = FAILURE
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1499153
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java


 JobHistory should not use File.separator to match timestamp in path
 ---

 Key: MAPREDUCE-5359
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5359
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: MAPREDUCE-5359-trunk.2.patch, MAPREDUCE-5359-trunk.patch


 In the {{HistoryFileManager.getTimestampPartFromPath()}} method, we use the 
 following regular expression to match the timestamp in a Path object. 
 {code:java}
 "\\d{4}" + "\\" + File.separator + "\\d{2}" + "\\" + File.separator + "\\d{2}"
 {code}
 This is incorrect because Path always uses a forward slash, even for Windows 
 paths, while File.separator is platform dependent and is a backslash on Windows.
 This leads to the timestamp failing to match on Windows. One consequence is 
 that {{addDirectoryToSerialNumberIndex()}} also fails. Later, 
 {{getFileInfo()}} will fail if the job info is not in the cache or the 
 intermediate directory.
 The test case {{TestJobHistoryParsing.testScanningOldDirs()}} tests exactly 
 the above scenario and fails on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5355) MiniMRYarnCluster with localFs does not work on Windows

2013-07-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698992#comment-13698992
 ] 

Hudson commented on MAPREDUCE-5355:
---

Integrated in Hadoop-Mapreduce-trunk #1476 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1476/])
MAPREDUCE-5355. MiniMRYarnCluster with localFs does not work on Windows. 
Contributed by Chuan Liu. (Revision 1499148)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1499148
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/MiniMRYarnCluster.java


 MiniMRYarnCluster with localFs does not work on Windows
 ---

 Key: MAPREDUCE-5355
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5355
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: MAPREDUCE-5355-branch-2.patch, 
 MAPREDUCE-5355-trunk.2.patch, MAPREDUCE-5355-trunk.patch


 When MiniMRYarnCluster is configured to run on localFs instead of a remote 
 FS (i.e. MiniDFSCluster), the job will fail on Windows. The error message 
 looks like the following.
 {noformat}
 java.io.IOException: Job status not available
 {noformat}
 In my testing, the following unit tests hit this exception.
 * TestMRJobsWithHistoryService
 * TestClusterMRNotification
 * TestJobCleanup
 * TestJobCounters
 * TestMiniMRClientCluster
 * TestJobOutputCommitter
 * TestMRAppWithCombiner
 * TestMROldApiJobs
 * TestSpeculativeExecution

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5359) JobHistory should not use File.separator to match timestamp in path

2013-07-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698994#comment-13698994
 ] 

Hudson commented on MAPREDUCE-5359:
---

Integrated in Hadoop-Mapreduce-trunk #1476 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1476/])
MAPREDUCE-5359. JobHistory should not use File.separator to match timestamp 
in path. Contributed by Chuan Liu. (Revision 1499153)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1499153
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JobHistoryUtils.java


 JobHistory should not use File.separator to match timestamp in path
 ---

 Key: MAPREDUCE-5359
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5359
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: MAPREDUCE-5359-trunk.2.patch, MAPREDUCE-5359-trunk.patch


 In the {{HistoryFileManager.getTimestampPartFromPath()}} method, we use the 
 following regular expression to match the timestamp in a Path object. 
 {code:java}
 "\\d{4}" + "\\" + File.separator + "\\d{2}" + "\\" + File.separator + "\\d{2}"
 {code}
 This is incorrect because Path always uses a forward slash, even for Windows 
 paths, while File.separator is platform dependent and is a backslash on Windows.
 This leads to the timestamp failing to match on Windows. One consequence is 
 that {{addDirectoryToSerialNumberIndex()}} also fails. Later, 
 {{getFileInfo()}} will fail if the job info is not in the cache or the 
 intermediate directory.
 The test case {{TestJobHistoryParsing.testScanningOldDirs()}} tests exactly 
 the above scenario and fails on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5357) Job staging directory owner checking could fail on Windows

2013-07-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698988#comment-13698988
 ] 

Hudson commented on MAPREDUCE-5357:
---

Integrated in Hadoop-Mapreduce-trunk #1476 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1476/])
MAPREDUCE-5357. Job staging directory owner checking could fail on Windows. 
(Revision 1499210)

 Result = SUCCESS
cnauroth : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1499210
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java


 Job staging directory owner checking could fail on Windows
 --

 Key: MAPREDUCE-5357
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5357
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 3.0.0, 2.1.0-beta

 Attachments: MAPREDUCE-5357-trunk.patch


 In {{JobSubmissionFiles.getStagingDir()}}, we have the following code, which 
 throws an exception if the directory owner is not the current user.
 {code:java}
   String owner = fsStatus.getOwner();
   if (!(owner.equals(currentUser) || owner.equals(realUser))) {
     throw new IOException("The ownership on the staging directory " +
         stagingArea + " is not as expected. " +
         "It is owned by " + owner + ". The directory must " +
         "be owned by the submitter " + currentUser + " or " +
         "by " + realUser);
   }
 {code}
 This check can fail on Windows when the underlying file system is 
 LocalFileSystem, because on Windows the default owner of a file or directory 
 can be the Administrators group when the user belongs to that group.
 Quite a few MR unit tests that run the MR mini cluster with localFs as the 
 underlying file system fail because of this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-3193) FileInputFormat doesn't read files recursively in the input path dir

2013-07-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698989#comment-13698989
 ] 

Hudson commented on MAPREDUCE-3193:
---

Integrated in Hadoop-Mapreduce-trunk #1476 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1476/])
MAPREDUCE-3193. FileInputFormat doesn't read files recursively in the input 
path dir. Contributed by Devaraj K (Revision 1499125)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1499125
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ConfigUtil.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/input
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestFileInputFormat.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestFileInputFormat.java


 FileInputFormat doesn't read files recursively in the input path dir
 

 Key: MAPREDUCE-3193
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3193
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv1, mrv2
Affects Versions: 0.23.2, 2.0.0-alpha, 3.0.0
Reporter: Ramgopal N
Assignee: Devaraj K
 Fix For: 3.0.0, 2.3.0, 0.23.10

 Attachments: MAPREDUCE-3193-1.patch, MAPREDUCE-3193-2.patch, 
 MAPREDUCE-3193-2.patch, MAPREDUCE-3193-3.patch, MAPREDUCE-3193-4.patch, 
 MAPREDUCE-3193-5.patch, MAPREDUCE-3193.patch, MAPREDUCE-3193.security.patch


 A java.io.FileNotFoundException is thrown if an input file is more than one 
 folder level deep, and the job fails.
 Example: the input file is /r1/r2/input.txt

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5358) MRAppMaster throws invalid transitions for JobImpl

2013-07-03 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699004#comment-13699004
 ] 

Jason Lowe commented on MAPREDUCE-5358:
---

+1

 MRAppMaster throws invalid transitions for JobImpl
 --

 Key: MAPREDUCE-5358
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5358
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mr-am
Affects Versions: 2.0.1-alpha, 2.0.5-alpha
Reporter: Devaraj K
Assignee: Devaraj K
 Attachments: MAPREDUCE-5358.patch


 {code:xml}
 2013-06-26 11:39:50,128 ERROR [AsyncDispatcher event handler] 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Can't handle this event 
 at current state
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 JOB_TASK_ATTEMPT_COMPLETED at SUCCEEDED
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
   at 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:720)
   at 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:119)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:962)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:958)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:128)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
   at java.lang.Thread.run(Thread.java:662)
 {code}
 {code:xml}
 2013-06-26 11:39:50,129 ERROR [AsyncDispatcher event handler] 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Can't handle this event 
 at current state
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 JOB_MAP_TASK_RESCHEDULED at SUCCEEDED
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
   at 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:720)
   at 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:119)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:962)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:958)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:128)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
   at java.lang.Thread.run(Thread.java:662)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5221) Reduce side Combiner is not used when using the new API

2013-07-03 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699029#comment-13699029
 ] 

Karthik Kambatla commented on MAPREDUCE-5221:
-

bq. Should we merge these patches into one patch?

Merging them into one patch/JIRA will make it a lot easier to comprehend. 
Thanks.

 Reduce side Combiner is not used when using the new API
 ---

 Key: MAPREDUCE-5221
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5221
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.0.4-alpha
Reporter: Siddharth Seth
Assignee: Tsuyoshi OZAWA
 Attachments: MAPREDUCE-5221.1.patch, MAPREDUCE-5221.2.patch, 
 MAPREDUCE-5221.3.patch, MAPREDUCE-5221.4.patch


 If a combiner is specified using o.a.h.mapreduce.Job.setCombinerClass, it 
 will be silently ignored on the reduce side, since the reduce-side usage is 
 only aware of the old-API combiner.
 This doesn't fail the job, since the new combiner key does not deprecate the 
 old key.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-5358) MRAppMaster throws invalid transitions for JobImpl

2013-07-03 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated MAPREDUCE-5358:
--

   Resolution: Fixed
Fix Version/s: 2.3.0
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks, Devaraj!  I committed this to trunk and branch-2.

 MRAppMaster throws invalid transitions for JobImpl
 --

 Key: MAPREDUCE-5358
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5358
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mr-am
Affects Versions: 2.0.1-alpha, 2.0.5-alpha
Reporter: Devaraj K
Assignee: Devaraj K
 Fix For: 3.0.0, 2.3.0

 Attachments: MAPREDUCE-5358.patch


 {code:xml}
 2013-06-26 11:39:50,128 ERROR [AsyncDispatcher event handler] 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Can't handle this event 
 at current state
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 JOB_TASK_ATTEMPT_COMPLETED at SUCCEEDED
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
   at 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:720)
   at 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:119)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:962)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:958)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:128)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
   at java.lang.Thread.run(Thread.java:662)
 {code}
 {code:xml}
 2013-06-26 11:39:50,129 ERROR [AsyncDispatcher event handler] 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Can't handle this event 
 at current state
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 JOB_MAP_TASK_RESCHEDULED at SUCCEEDED
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
   at 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:720)
   at 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:119)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:962)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:958)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:128)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
   at java.lang.Thread.run(Thread.java:662)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5358) MRAppMaster throws invalid transitions for JobImpl

2013-07-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699046#comment-13699046
 ] 

Hudson commented on MAPREDUCE-5358:
---

Integrated in Hadoop-trunk-Commit #4040 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4040/])
MAPREDUCE-5358. MRAppMaster throws invalid transitions for JobImpl. 
Contributed by Devaraj K (Revision 1499425)

 Result = SUCCESS
jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1499425
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java


 MRAppMaster throws invalid transitions for JobImpl
 --

 Key: MAPREDUCE-5358
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5358
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mr-am
Affects Versions: 2.0.1-alpha, 2.0.5-alpha
Reporter: Devaraj K
Assignee: Devaraj K
 Fix For: 3.0.0, 2.3.0

 Attachments: MAPREDUCE-5358.patch


 {code:xml}
 2013-06-26 11:39:50,128 ERROR [AsyncDispatcher event handler] 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Can't handle this event 
 at current state
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 JOB_TASK_ATTEMPT_COMPLETED at SUCCEEDED
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
   at 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:720)
   at 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:119)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:962)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:958)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:128)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
   at java.lang.Thread.run(Thread.java:662)
 {code}
 {code:xml}
 2013-06-26 11:39:50,129 ERROR [AsyncDispatcher event handler] 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Can't handle this event 
 at current state
 org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
 JOB_MAP_TASK_RESCHEDULED at SUCCEEDED
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:301)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
   at 
 org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
   at 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:720)
   at 
 org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:119)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:962)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:958)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:128)
   at 
 org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:77)
   at java.lang.Thread.run(Thread.java:662)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-5351) JobTracker memory leak caused by CleanupQueue reopening FileSystem

2013-07-03 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated MAPREDUCE-5351:
--

Attachment: MAPREDUCE-5351-addendum-1.patch

 JobTracker memory leak caused by CleanupQueue reopening FileSystem
 --

 Key: MAPREDUCE-5351
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5351
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 1.1.2
Reporter: Sandy Ryza
Assignee: Sandy Ryza
Priority: Critical
 Fix For: 1.2.1

 Attachments: MAPREDUCE-5351-1.patch, MAPREDUCE-5351-2.patch, 
 MAPREDUCE-5351-addendum-1.patch, MAPREDUCE-5351-addendum.patch, 
 MAPREDUCE-5351.patch


 When a job is completed, closeAllForUGI is called to close all the cached 
 FileSystems in the FileSystem cache.  However, the CleanupQueue may run after 
 this occurs and call FileSystem.get() to delete the staging directory, adding 
 a FileSystem to the cache that will never be closed.
 People on the user-list have reported this causing their JobTrackers to OOME 
 every two weeks.
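 To make the sequence concrete, an illustrative sketch of the leak pattern 
 described above (this is not JobTracker code; conf and stagingDir are assumed 
 to be in scope).
 {code:java}
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.security.UserGroupInformation;

 UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
 FileSystem.closeAllForUGI(ugi);         // job completion: cached FileSystems are closed
 // ... later, a CleanupQueue-style deletion of the staging directory ...
 FileSystem fs = FileSystem.get(conf);   // re-opens and re-caches a FileSystem
 fs.delete(stagingDir, true);            // staging dir removed, but fs stays cached and is never closed
 {code}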

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5351) JobTracker memory leak caused by CleanupQueue reopening FileSystem

2013-07-03 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699196#comment-13699196
 ] 

Sandy Ryza commented on MAPREDUCE-5351:
---

Uploaded a new patch that updates the comments and the test case.

 JobTracker memory leak caused by CleanupQueue reopening FileSystem
 --

 Key: MAPREDUCE-5351
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5351
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 1.1.2
Reporter: Sandy Ryza
Assignee: Sandy Ryza
Priority: Critical
 Fix For: 1.2.1

 Attachments: MAPREDUCE-5351-1.patch, MAPREDUCE-5351-2.patch, 
 MAPREDUCE-5351-addendum-1.patch, MAPREDUCE-5351-addendum.patch, 
 MAPREDUCE-5351.patch


 When a job is completed, closeAllForUGI is called to close all the cached 
 FileSystems in the FileSystem cache.  However, the CleanupQueue may run after 
 this occurs and call FileSystem.get() to delete the staging directory, adding 
 a FileSystem to the cache that will never be closed.
 People on the user-list have reported this causing their JobTrackers to OOME 
 every two weeks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5351) JobTracker memory leak caused by CleanupQueue reopening FileSystem

2013-07-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699203#comment-13699203
 ] 

Hadoop QA commented on MAPREDUCE-5351:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12590672/MAPREDUCE-5351-addendum-1.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3826//console

This message is automatically generated.

 JobTracker memory leak caused by CleanupQueue reopening FileSystem
 --

 Key: MAPREDUCE-5351
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5351
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobtracker
Affects Versions: 1.1.2
Reporter: Sandy Ryza
Assignee: Sandy Ryza
Priority: Critical
 Fix For: 1.2.1

 Attachments: MAPREDUCE-5351-1.patch, MAPREDUCE-5351-2.patch, 
 MAPREDUCE-5351-addendum-1.patch, MAPREDUCE-5351-addendum.patch, 
 MAPREDUCE-5351.patch


 When a job is completed, closeAllForUGI is called to close all the cached 
 FileSystems in the FileSystem cache.  However, the CleanupQueue may run after 
 this occurs and call FileSystem.get() to delete the staging directory, adding 
 a FileSystem to the cache that will never be closed.
 People on the user-list have reported this causing their JobTrackers to OOME 
 every two weeks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5330) JVM manager should not forcefully kill the process on Signal.TERM on Windows

2013-07-03 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13699219#comment-13699219
 ] 

Bikas Saha commented on MAPREDUCE-5330:
---

The TERM signal typically notifies the process that it should clean up since it 
will be killed shortly. Not sending it any signal doesn't quite match that 
use case. Are we sure that all cases of TERM are followed up by KILL?

 JVM manager should not forcefully kill the process on Signal.TERM on Windows
 

 Key: MAPREDUCE-5330
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5330
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Fix For: 1-win

 Attachments: MAPREDUCE-5330.patch


 In MapReduce, we sometimes kill a task's JVM before it naturally shuts down 
 if we want to launch other tasks (look in 
 JvmManager$JvmManagerForType.reapJvm). This behavior means that if the map 
 task process is in the middle of doing some cleanup/finalization after the 
 task is done, it might be interrupted/killed without getting a chance to finish. 
 In Microsoft's Hadoop Service, after a Map/Reduce task is done, and while file 
 systems are being closed in a special shutdown hook, we typically upload 
 storage (ASV in our context) usage metrics to Microsoft Azure Tables. So if 
 this kill happens, these metrics are lost. The impact is that for many MR jobs 
 we don't see accurate metrics reported most of the time.
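
To illustrate the failure mode in isolation, a minimal sketch (uploadMetrics() is a hypothetical stand-in for the finalization work described above): a JVM shutdown hook runs on a graceful exit, but a forceful kill ends the process before the hook gets a chance.

{code}
/** Minimal sketch: uploadMetrics() is hypothetical; it stands in for the
 *  cleanup/finalization work described above. The hook runs when the JVM
 *  exits normally, but a forceful kill terminates the process before the
 *  hook can run, so the work is silently lost. */
public class ShutdownHookSketch {

  public static void main(String[] args) throws InterruptedException {
    Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
      public void run() {
        uploadMetrics();
      }
    }));
    Thread.sleep(60000L); // stand-in for the task finishing its real work
  }

  static void uploadMetrics() {
    System.out.println("metrics uploaded");
  }
}
{code}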

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5187) Create mapreduce command scripts on Windows

2013-07-03 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699258#comment-13699258
 ] 

Bikas Saha commented on MAPREDUCE-5187:
---

looks ok to me. +1

 Create mapreduce command scripts on Windows
 ---

 Key: MAPREDUCE-5187
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5187
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 3.0.0
Reporter: Chuan Liu
Assignee: Chuan Liu
 Attachments: MAPREDUCE-5187-trunk.2.patch, MAPREDUCE-5187-trunk.patch


 We don't have mapreduce command scripts, e.g. mapred.cmd, on Windows in the trunk 
 code base right now. As a result, some important functionality, such as the Job 
 history server, is not available. This JIRA was created to track the issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5278) Distributed cache is broken when JT staging dir is not on the default FS

2013-07-03 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699264#comment-13699264
 ] 

Bikas Saha commented on MAPREDUCE-5278:
---

The config string is still being used in the test. Other than that, it looks 
good. We can increase the visibility of the named variable and mark it @Private; 
then the test could use it.

 Distributed cache is broken when JT staging dir is not on the default FS
 

 Key: MAPREDUCE-5278
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5278
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: distributed-cache
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Fix For: 1-win

 Attachments: MAPREDUCE-5278.2.patch, MAPREDUCE-5278.3.patch, 
 MAPREDUCE-5278.patch


 Today, the JobTracker staging dir (mapreduce.jobtracker.staging.root.dir) is 
 set to point to HDFS, even when another file system (e.g. the Amazon S3 file 
 system or the Windows ASV file system) is the default file system.
 For ASV, this configuration was chosen for a few reasons:
 1. To prevent leaking the storage account credentials to the user's storage 
 account; 
 2. It uses HDFS for the transient job files, which is good for two reasons – a) 
 it does not flood the user's storage account with irrelevant data/files, and b) 
 it leverages HDFS locality for small files.
 However, this approach conflicts with how distributed cache caching works, 
 completely negating the feature's functionality.
 When files are added to the distributed cache (through the files/archives/libjars 
 Hadoop generic options), they are copied to the JobTracker staging dir only 
 if they reside on a file system different from the JobTracker's. Later on, 
 this path is used as the key to cache the files locally on the TaskTracker's 
 machine and to avoid localization (download/unzip) of the distributed cache 
 files if they are already localized.
 In this configuration the caching is completely disabled, and we always end up 
 copying dist cache files to the JobTracker's staging dir first and 
 localizing them on the TaskTracker machine second.
 This is especially problematic for Oozie scenarios, as Oozie uses the dist cache 
 to distribute Hive/Pig jars throughout the cluster.
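
As a rough, hypothetical sketch of the mechanism described above (condensed; the real JobClient/TaskTracker code paths are more involved): the copy happens only when the source file system differs from the JobTracker's, and the resulting path is what later serves as the localization cache key, so a per-job staging path defeats the cache.

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

/** Hypothetical, condensed sketch of the behaviour described above. */
public class DistCacheKeySketch {

  /** Returns the URI that would later serve as the localization cache key.
   *  Files already on the JobTracker's file system keep their original
   *  (stable) path; everything else is copied under the per-job staging
   *  dir, so the key changes on every job and the cache never hits. */
  static URI cacheKeyFor(Configuration conf, Path original, Path jobStagingDir)
      throws Exception {
    FileSystem srcFs = original.getFileSystem(conf);
    FileSystem jtFs = jobStagingDir.getFileSystem(conf);
    if (srcFs.getUri().equals(jtFs.getUri())) {
      return original.toUri();            // stable key: caching works
    }
    Path staged = new Path(jobStagingDir, original.getName());
    FileUtil.copy(srcFs, original, jtFs, staged, false, conf);
    return staged.toUri();                // per-job key: caching is defeated
  }
}
{code}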

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-5213) Re-assess TokenCache methods marked @Private

2013-07-03 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated MAPREDUCE-5213:


Issue Type: Sub-task  (was: Bug)
Parent: MAPREDUCE-5108

 Re-assess TokenCache methods marked @Private
 

 Key: MAPREDUCE-5213
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5213
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor
 Attachments: mr-5213-1.patch, mr-5213-2.patch


 While looking at the source, I noticed that the TokenCache#loadTokens methods are 
 marked @Private but are not used anywhere. 
 We should either remove those methods or mark them Public or LimitedPrivate.
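
For reference, a minimal sketch of the three audience levels mentioned here, using Hadoop's org.apache.hadoop.classification annotations (the methods are placeholders, not the actual TokenCache#loadTokens signatures):

{code}
import org.apache.hadoop.classification.InterfaceAudience;

/** Placeholder methods only; not the real TokenCache code. */
public class AudienceAnnotationSketch {

  @InterfaceAudience.Private                        // internal to Hadoop itself
  public static void privateHelper() { }

  @InterfaceAudience.LimitedPrivate({"MapReduce"})  // shared with named projects
  public static void limitedPrivateHelper() { }

  @InterfaceAudience.Public                         // part of the public API
  public static void publicHelper() { }
}
{code}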

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-5213) Re-assess TokenCache methods marked @Private

2013-07-03 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated MAPREDUCE-5213:


Issue Type: Bug  (was: Sub-task)
Parent: (was: MAPREDUCE-5108)

 Re-assess TokenCache methods marked @Private
 

 Key: MAPREDUCE-5213
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5213
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Minor
 Attachments: mr-5213-1.patch, mr-5213-2.patch


 While looking at the source, noticed that TokenCache#loadTokens methods are 
 marked @Private but not used anywhere. 
 We should either remove those methods or mark them Public or LimitedPrivate.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5367) Local jobs all use same local working directory

2013-07-03 Thread Ben Podgursky (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699450#comment-13699450
 ] 

Ben Podgursky commented on MAPREDUCE-5367:
--

For what it's worth, our current hacky workaround for this bug is appending a 
UUID to the working directory name in LocalJobRunner:

  String tmpDir = jobDir + "/" + id + "-" + UUID.randomUUID();
  this.localJobDir = localFs.makeQualified(conf.getLocalPath(tmpDir));

and deleting it on job cleanup:

   localFs.delete(localJobDir, true);

But I'm sure there's a cleaner way to scope the paths.

 Local jobs all use same local working directory
 ---

 Key: MAPREDUCE-5367
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5367
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Affects Versions: 1.2.0
Reporter: Sandy Ryza
Assignee: Sandy Ryza

 This means that local jobs, even in different JVMs, can't run concurrently 
 because they might delete each other's files during work directory setup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5363) Fix doc and spelling for TaskCompletionEvent#getTaskStatus and getStatus

2013-07-03 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699452#comment-13699452
 ] 

Akira AJISAKA commented on MAPREDUCE-5363:
--

Thank you for your advice. I uploaded a new patch.

 Fix doc and spelling for TaskCompletionEvent#getTaskStatus and getStatus
 

 Key: MAPREDUCE-5363
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5363
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv1, mrv2
Affects Versions: 1.1.2, 2.1.0-beta
Reporter: Sandy Ryza
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: MAPREDUCE-5363-1.patch, MAPREDUCE-5363-2.patch, 
 MAPREDUCE-5363-3.patch


 The doc for TaskCompletionEvent#get(Task)Status in both MR1 and MR2 is
 {code}
 Returns enum Status.SUCESS or Status.FAILURE.
 @return task tracker status
 {code}
 The actual values that the Status enum can take are
 FAILED, KILLED, SUCCEEDED, OBSOLETE, TIPFAILED
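
For comparison, a sketch of how a corrected accessor and its comment could read, built around the Status values listed above (illustrative only; not the real TaskCompletionEvent class and not necessarily the wording of the attached patches):

{code}
/** Illustrative sketch only; not the real TaskCompletionEvent class. */
public class TaskCompletionEventSketch {

  /** Mirrors the values the real Status enum can take. */
  public enum Status { FAILED, KILLED, SUCCEEDED, OBSOLETE, TIPFAILED }

  private final Status status;

  public TaskCompletionEventSketch(Status status) {
    this.status = status;
  }

  /**
   * Returns the {@link Status} of the completion event: one of FAILED,
   * KILLED, SUCCEEDED, OBSOLETE or TIPFAILED.
   *
   * @return the task completion status
   */
  public Status getStatus() {
    return status;
  }
}
{code}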

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-5363) Fix doc and spelling for TaskCompletionEvent#getTaskStatus and getStatus

2013-07-03 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated MAPREDUCE-5363:
-

Attachment: MAPREDUCE-5363-3.patch

 Fix doc and spelling for TaskCompletionEvent#getTaskStatus and getStatus
 

 Key: MAPREDUCE-5363
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5363
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv1, mrv2
Affects Versions: 1.1.2, 2.1.0-beta
Reporter: Sandy Ryza
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: MAPREDUCE-5363-1.patch, MAPREDUCE-5363-2.patch, 
 MAPREDUCE-5363-3.patch


 The doc for TaskCompletionEvent#get(Task)Status in both MR1 and MR2 is
 {code}
 Returns enum Status.SUCESS or Status.FAILURE.
 @return task tracker status
 {code}
 The actual values that the Status enum can take are
 FAILED, KILLED, SUCCEEDED, OBSOLETE, TIPFAILED

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5367) Local jobs all use same local working directory

2013-07-03 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699456#comment-13699456
 ] 

Sandy Ryza commented on MAPREDUCE-5367:
---

My patch was going to just add the job ID in.  Is there a reason that the 
random UUID is needed on top of that?

 Local jobs all use same local working directory
 ---

 Key: MAPREDUCE-5367
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5367
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Affects Versions: 1.2.0
Reporter: Sandy Ryza
Assignee: Sandy Ryza

 This means that local jobs, even in different JVMs, can't run concurrently 
 because they might delete each other's files during work directory setup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5367) Local jobs all use same local working directory

2013-07-03 Thread Ben Podgursky (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699463#comment-13699463
 ] 

Ben Podgursky commented on MAPREDUCE-5367:
--

Yeah, the problem is that if the jobs are running in different JVMs, the job 
numbering starts over at 0001, so there are still conflicts (for example, if 
our build server starts two tests concurrently).

 Local jobs all use same local working directory
 ---

 Key: MAPREDUCE-5367
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5367
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Affects Versions: 1.2.0
Reporter: Sandy Ryza
Assignee: Sandy Ryza

 This means that local jobs, even in different JVMs, can't run concurrently 
 because they might delete each other's files during work directory setup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5364) Deadlock between RenewalTimerTask methods cancel() and run()

2013-07-03 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699466#comment-13699466
 ] 

Alejandro Abdelnur commented on MAPREDUCE-5364:
---

+1, LGTM.

 Deadlock between RenewalTimerTask methods cancel() and run()
 

 Key: MAPREDUCE-5364
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5364
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Attachments: mr-5364-1.patch


 MAPREDUCE-4860 introduced a local variable {{cancelled}} in 
 {{RenewalTimerTask}} to fix the race where {{DelegationTokenRenewal}} 
 attempts to renew a token even after the job is removed. However, the patch 
 also makes {{run()}} and {{cancel()}} synchronized methods, leading to a 
 potential deadlock against {{run()}}'s catch block (error path).
 The deadlock stack traces are below:
 {noformat}
  - 
 org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal$RenewalTimerTask.cancel()
  @bci=0, line=240 (Interpreted frame)
  - 
 org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal.removeDelegationTokenRenewalForJob(org.apache.hadoop.mapreduce.JobID)
  @bci=109, line=319 (Interpreted frame)
 {noformat}
 {noformat}
  - 
 org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal.removeFailedDelegationToken(org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal$DelegationTokenToRenew)
  @bci=62, line=297 (Interpreted frame)
  - 
 org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal.access$300(org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal$DelegationTokenToRenew)
  @bci=1, line=47 (Interpreted frame)
  - 
 org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal$RenewalTimerTask.run()
  @bci=148, line=234 (Interpreted frame)
 {noformat}
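
Reduced to what the stack traces suggest, this is a lock-ordering inversion between the renewer's bookkeeping lock and the timer task's monitor. A minimal, hypothetical sketch (Renewer and Task stand in for DelegationTokenRenewal and RenewalTimerTask; the real classes hold much more state):

{code}
/** Hypothetical reduction of the deadlock described above. */
public class RenewalDeadlockSketch {

  static class Renewer {
    final Task task = new Task(this);

    /** Path 1: takes the renewer lock, then needs the task lock. */
    synchronized void removeRenewalForJob() {
      task.cancel();
    }

    /** Called from the task's error path while the task lock is held;
     *  needs the renewer lock, i.e. the opposite acquisition order. */
    synchronized void removeFailedToken() { }
  }

  static class Task {
    final Renewer renewer;

    Task(Renewer renewer) {
      this.renewer = renewer;
    }

    /** Path 2: takes the task lock, then needs the renewer lock. */
    synchronized void run() {
      renewer.removeFailedToken();
    }

    synchronized void cancel() { }
  }

  public static void main(String[] args) {
    final Renewer renewer = new Renewer();
    new Thread(new Runnable() {
      public void run() {
        renewer.task.run();          // holds task lock, wants renewer lock
      }
    }).start();
    renewer.removeRenewalForJob();   // holds renewer lock, wants task lock; may hang
  }
}
{code}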

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5363) Fix doc and spelling for TaskCompletionEvent#getTaskStatus and getStatus

2013-07-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699469#comment-13699469
 ] 

Hadoop QA commented on MAPREDUCE-5363:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12590719/MAPREDUCE-5363-3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3827//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3827//console

This message is automatically generated.

 Fix doc and spelling for TaskCompletionEvent#getTaskStatus and getStatus
 

 Key: MAPREDUCE-5363
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5363
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv1, mrv2
Affects Versions: 1.1.2, 2.1.0-beta
Reporter: Sandy Ryza
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: MAPREDUCE-5363-1.patch, MAPREDUCE-5363-2.patch, 
 MAPREDUCE-5363-3.patch


 The doc for TaskCompletionEvent#get(Task)Status in both MR1 and MR2 is
 {code}
 Returns enum Status.SUCESS or Status.FAILURE.
 @return task tracker status
 {code}
 The actual values that the Status enum can take are
 FAILED, KILLED, SUCCEEDED, OBSOLETE, TIPFAILED

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-4374) Fix child task environment variable config and add support for Windows

2013-07-03 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated MAPREDUCE-4374:
-

Attachment: MAPREDUCE-4374-trunk.patch

Attaching a patch for trunk.

 Fix child task environment variable config and add support for Windows
 --

 Key: MAPREDUCE-4374
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4374
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win, 2.1.0-beta
Reporter: Chuan Liu
Assignee: Chuan Liu
Priority: Minor
 Fix For: 1-win

 Attachments: MAPREDUCE-4374-branch-1-win-2.patch, 
 MAPREDUCE-4374-branch-1-win.patch, MAPREDUCE-4374-trunk.patch


 In HADOOP-2838, a new feature was introduced to set environment variables via 
 the Hadoop config 'mapred.child.env' for child tasks. There have been further 
 fixes and improvements around this feature, e.g. HADOOP-5981 was a bug fix, and 
 MAPREDUCE-478 split the config into 'mapred.map.child.env' and 
 'mapred.reduce.child.env'. However, the current implementation is still not 
 complete; I believe it does not match its documentation or its original 
 intent. Also, by using ‘:’ (colon) and ‘;’ (semicolon) in the configuration 
 syntax, we will have problems on Windows because ‘:’ appears very 
 often in Windows paths, as in “C:\”, and environment variables are very 
 often used to hold path names. This JIRA was created to fix the problem and provide 
 support on Windows.
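
As a rough illustration of the syntax in question, a sketch under the assumption that these properties take comma-separated key=value pairs, as documented for branch-1's child-env settings (check your release's mapred-default.xml for the exact property names and semantics):

{code}
import org.apache.hadoop.conf.Configuration;

/** Sketch only: assumes the comma-separated key=value syntax used by the
 *  child-env properties in branch-1; verify property names against your
 *  release's mapred-default.xml. */
public class ChildEnvConfigSketch {

  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Plain assignments: key=value pairs separated by commas.
    conf.set("mapred.map.child.env", "TMPDIR=/tmp/task,LOG_LEVEL=INFO");

    // Appending to an inherited variable relies on ':' as a path separator,
    // which is exactly where Windows paths such as C:\ collide with the
    // syntax, as the description above notes.
    conf.set("mapred.reduce.child.env",
        "LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/native/lib");

    System.out.println(conf.get("mapred.map.child.env"));
  }
}
{code}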

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-5364) Deadlock between RenewalTimerTask methods cancel() and run()

2013-07-03 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated MAPREDUCE-5364:
--

   Resolution: Fixed
Fix Version/s: 1.3.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Karthik. Committed to branch-1.

 Deadlock between RenewalTimerTask methods cancel() and run()
 

 Key: MAPREDUCE-5364
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5364
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
 Fix For: 1.3.0

 Attachments: mr-5364-1.patch


 MAPREDUCE-4860 introduced a local variable {{cancelled}} in 
 {{RenewalTimerTask}} to fix the race where {{DelegationTokenRenewal}} 
 attempts to renew a token even after the job is removed. However, the patch 
 also makes {{run()}} and {{cancel()}} synchronized methods, leading to a 
 potential deadlock against {{run()}}'s catch block (error path).
 The deadlock stack traces are below:
 {noformat}
  - 
 org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal$RenewalTimerTask.cancel()
  @bci=0, line=240 (Interpreted frame)
  - 
 org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal.removeDelegationTokenRenewalForJob(org.apache.hadoop.mapreduce.JobID)
  @bci=109, line=319 (Interpreted frame)
 {noformat}
 {noformat}
  - 
 org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal.removeFailedDelegationToken(org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal$DelegationTokenToRenew)
  @bci=62, line=297 (Interpreted frame)
  - 
 org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal.access$300(org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal$DelegationTokenToRenew)
  @bci=1, line=47 (Interpreted frame)
  - 
 org.apache.hadoop.mapreduce.security.token.DelegationTokenRenewal$RenewalTimerTask.run()
  @bci=148, line=234 (Interpreted frame)
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (MAPREDUCE-5278) Distributed cache is broken when JT staging dir is not on the default FS

2013-07-03 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated MAPREDUCE-5278:
---

Attachment: MAPREDUCE-5278.4.patch

 Distributed cache is broken when JT staging dir is not on the default FS
 

 Key: MAPREDUCE-5278
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5278
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: distributed-cache
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Fix For: 1-win

 Attachments: MAPREDUCE-5278.2.patch, MAPREDUCE-5278.3.patch, 
 MAPREDUCE-5278.4.patch, MAPREDUCE-5278.patch


 Today, the JobTracker staging dir (mapreduce.jobtracker.staging.root.dir) is 
 set to point to HDFS, even when another file system (e.g. the Amazon S3 file 
 system or the Windows ASV file system) is the default file system.
 For ASV, this configuration was chosen for a few reasons:
 1. To prevent leaking the storage account credentials to the user's storage 
 account; 
 2. It uses HDFS for the transient job files, which is good for two reasons – a) 
 it does not flood the user's storage account with irrelevant data/files, and b) 
 it leverages HDFS locality for small files.
 However, this approach conflicts with how distributed cache caching works, 
 completely negating the feature's functionality.
 When files are added to the distributed cache (through the files/archives/libjars 
 Hadoop generic options), they are copied to the JobTracker staging dir only 
 if they reside on a file system different from the JobTracker's. Later on, 
 this path is used as the key to cache the files locally on the TaskTracker's 
 machine and to avoid localization (download/unzip) of the distributed cache 
 files if they are already localized.
 In this configuration the caching is completely disabled, and we always end up 
 copying dist cache files to the JobTracker's staging dir first and 
 localizing them on the TaskTracker machine second.
 This is especially problematic for Oozie scenarios, as Oozie uses the dist cache 
 to distribute Hive/Pig jars throughout the cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5278) Distributed cache is broken when JT staging dir is not on the default FS

2013-07-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699502#comment-13699502
 ] 

Hadoop QA commented on MAPREDUCE-5278:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12590727/MAPREDUCE-5278.4.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3828//console

This message is automatically generated.

 Distributed cache is broken when JT staging dir is not on the default FS
 

 Key: MAPREDUCE-5278
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5278
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: distributed-cache
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Fix For: 1-win

 Attachments: MAPREDUCE-5278.2.patch, MAPREDUCE-5278.3.patch, 
 MAPREDUCE-5278.4.patch, MAPREDUCE-5278.patch


 Today, the JobTracker staging dir (mapreduce.jobtracker.staging.root.dir) is 
 set to point to HDFS, even when another file system (e.g. the Amazon S3 file 
 system or the Windows ASV file system) is the default file system.
 For ASV, this configuration was chosen for a few reasons:
 1. To prevent leaking the storage account credentials to the user's storage 
 account; 
 2. It uses HDFS for the transient job files, which is good for two reasons – a) 
 it does not flood the user's storage account with irrelevant data/files, and b) 
 it leverages HDFS locality for small files.
 However, this approach conflicts with how distributed cache caching works, 
 completely negating the feature's functionality.
 When files are added to the distributed cache (through the files/archives/libjars 
 Hadoop generic options), they are copied to the JobTracker staging dir only 
 if they reside on a file system different from the JobTracker's. Later on, 
 this path is used as the key to cache the files locally on the TaskTracker's 
 machine and to avoid localization (download/unzip) of the distributed cache 
 files if they are already localized.
 In this configuration the caching is completely disabled, and we always end up 
 copying dist cache files to the JobTracker's staging dir first and 
 localizing them on the TaskTracker machine second.
 This is especially problematic for Oozie scenarios, as Oozie uses the dist cache 
 to distribute Hive/Pig jars throughout the cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5367) Local jobs all use same local working directory

2013-07-03 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699517#comment-13699517
 ] 

Sandy Ryza commented on MAPREDUCE-5367:
---

This was fixed by MAPREDUCE-4278, which adds a random number to the job ID.

 Local jobs all use same local working directory
 ---

 Key: MAPREDUCE-5367
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5367
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Affects Versions: 1.2.0
Reporter: Sandy Ryza
Assignee: Sandy Ryza

 This means that local jobs, even in different JVMs, can't run concurrently 
 because they might delete each other's files during work directory setup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5367) Local jobs all use same local working directory

2013-07-03 Thread Ben Podgursky (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699523#comment-13699523
 ] 

Ben Podgursky commented on MAPREDUCE-5367:
--

Oh awesome, didn't know that was fixed.  UUID is definitely unnecessary then.

 Local jobs all use same local working directory
 ---

 Key: MAPREDUCE-5367
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5367
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Affects Versions: 1.2.0
Reporter: Sandy Ryza
Assignee: Sandy Ryza

 This means that local jobs, even in different JVMs, can't run concurrently 
 because they might delete each other's files during work directory setup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5278) Distributed cache is broken when JT staging dir is not on the default FS

2013-07-03 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699544#comment-13699544
 ] 

Xi Fang commented on MAPREDUCE-5278:


Thanks Bikas. A new patch was attached. 

 Distributed cache is broken when JT staging dir is not on the default FS
 

 Key: MAPREDUCE-5278
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5278
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: distributed-cache
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Fix For: 1-win

 Attachments: MAPREDUCE-5278.2.patch, MAPREDUCE-5278.3.patch, 
 MAPREDUCE-5278.4.patch, MAPREDUCE-5278.patch


 Today, the JobTracker staging dir (mapreduce.jobtracker.staging.root.dir) is 
 set to point to HDFS, even when another file system (e.g. the Amazon S3 file 
 system or the Windows ASV file system) is the default file system.
 For ASV, this configuration was chosen for a few reasons:
 1. To prevent leaking the storage account credentials to the user's storage 
 account; 
 2. It uses HDFS for the transient job files, which is good for two reasons – a) 
 it does not flood the user's storage account with irrelevant data/files, and b) 
 it leverages HDFS locality for small files.
 However, this approach conflicts with how distributed cache caching works, 
 completely negating the feature's functionality.
 When files are added to the distributed cache (through the files/archives/libjars 
 Hadoop generic options), they are copied to the JobTracker staging dir only 
 if they reside on a file system different from the JobTracker's. Later on, 
 this path is used as the key to cache the files locally on the TaskTracker's 
 machine and to avoid localization (download/unzip) of the distributed cache 
 files if they are already localized.
 In this configuration the caching is completely disabled, and we always end up 
 copying dist cache files to the JobTracker's staging dir first and 
 localizing them on the TaskTracker machine second.
 This is especially problematic for Oozie scenarios, as Oozie uses the dist cache 
 to distribute Hive/Pig jars throughout the cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (MAPREDUCE-5373) TestFetchFailure.testFetchFailureMultipleReduces could fail intermittently

2013-07-03 Thread Chuan Liu (JIRA)
Chuan Liu created MAPREDUCE-5373:


 Summary: TestFetchFailure.testFetchFailureMultipleReduces could 
fail intermittently
 Key: MAPREDUCE-5373
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5373
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Chuan Liu


The unit test case could fail intermittently on both Linux and Windows in my 
testing. The error message seems to suggest that the task status was wrong during 
the test.

An example Linux failure:
{noformat}
---
Test set: org.apache.hadoop.mapreduce.v2.app.TestFetchFailure
---
Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 12.235 sec  
FAILURE!
testFetchFailureMultipleReduces(org.apache.hadoop.mapreduce.v2.app.TestFetchFailure)
  Time elapsed: 1261 sec   FAILURE!
java.lang.AssertionError: expected:SUCCEEDED but was:SCHEDULED
  at org.junit.Assert.fail(Assert.java:93)
  at org.junit.Assert.failNotEquals(Assert.java:647)
  at org.junit.Assert.assertEquals(Assert.java:128)
  at org.junit.Assert.assertEquals(Assert.java:147)
  at 
org.apache.hadoop.mapreduce.v2.app.TestFetchFailure.testFetchFailureMultipleReduces(TestFetchFailure.java:332)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
  at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
  at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
  at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
  at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
  at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
  at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
  at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
  at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
  at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
  at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
  at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
  at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
  at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
  at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
  at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
{noformat}

An example Windows failure:
{noformat}
---
Test set: org.apache.hadoop.mapreduce.v2.app.TestFetchFailure
---
Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 50.342 sec  
FAILURE!
testFetchFailureMultipleReduces(org.apache.hadoop.mapreduce.v2.app.TestFetchFailure)
  Time elapsed: 36175 sec   FAILURE!
java.lang.AssertionError: expected:SUCCEEDED but was:RUNNING
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.hadoop.mapreduce.v2.app.TestFetchFailure.testFetchFailureMultipleReduces(TestFetchFailure.java:332)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 

[jira] [Commented] (MAPREDUCE-5330) JVM manager should not forcefully kill the process on Signal.TERM on Windows

2013-07-03 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699755#comment-13699755
 ] 

Chris Nauroth commented on MAPREDUCE-5330:
--

I came across similar issues while working on the YARN nodemanager changes for 
Windows.  Bikas, I agree that this logic doesn't exactly match the meaning of 
SIGTERM.  To match SIGTERM, we really need a way for one process to signal 
another process with some graceful shutdown message, and a way for the other 
process to trigger custom code when it receives that message.  Unfortunately, 
I'm not aware of anything in the Windows API that provides an exact match.  
Therefore, the logic in this patch seems to be the closest approximation that's 
feasible right now.

To elaborate on this, {{TerminateProcess}} immediately kills the target 
process, and there is no way for that process to trap the call and run custom 
clean-up code.

http://msdn.microsoft.com/en-us/library/windows/desktop/ms686714(v=vs.85).aspx

This is much different from Unix signals, which allow the target process to 
install signal handlers to respond gracefully to things like SIGTERM.

There also seems to be some support for programmatically sending CTRL-C to a 
process and installing a custom handler to respond to it.  This would be 
{{SetConsoleCtrlHandler}} and {{GenerateConsoleCtrlEvent}}.  I've heard 
anecdotally that this can be used to create a rough approximation of Unix 
signals, but I haven't tried it myself.

http://msdn.microsoft.com/en-us/library/windows/desktop/ms686016(v=vs.85).aspx

http://msdn.microsoft.com/en-us/library/windows/desktop/ms683155(v=vs.85).aspx

Aside from that, the only other option seems to be for Windows applications to 
roll their own custom IPC protocol (i.e. one process sends another a custom 
graceful shutdown message over a named pipe).

It might be worth pursuing one of these solutions in the long term for absolute 
correctness, but these approaches will require a lot more coding and testing.

Xi, please let me know if I've missed anything regarding signaling capabilities 
in the Windows API.


 JVM manager should not forcefully kill the process on Signal.TERM on Windows
 

 Key: MAPREDUCE-5330
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5330
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Fix For: 1-win

 Attachments: MAPREDUCE-5330.patch


 In MapReduce, we sometimes kill a task's JVM before it naturally shuts down 
 if we want to launch other tasks (look in 
 JvmManager$JvmManagerForType.reapJvm). This behavior means that if the map 
 task process is in the middle of doing some cleanup/finalization after the 
 task is done, it might be interrupted/killed without getting a chance to finish. 
 In Microsoft's Hadoop Service, after a Map/Reduce task is done, and while file 
 systems are being closed in a special shutdown hook, we typically upload 
 storage (ASV in our context) usage metrics to Microsoft Azure Tables. So if 
 this kill happens, these metrics are lost. The impact is that for many MR jobs 
 we don't see accurate metrics reported most of the time.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (MAPREDUCE-5221) Reduce side Combiner is not used when using the new API

2013-07-03 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13699781#comment-13699781
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-5221:
---

[~kkambatl], OK, I'll do it.

 Reduce side Combiner is not used when using the new API
 ---

 Key: MAPREDUCE-5221
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5221
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.0.4-alpha
Reporter: Siddharth Seth
Assignee: Tsuyoshi OZAWA
 Attachments: MAPREDUCE-5221.1.patch, MAPREDUCE-5221.2.patch, 
 MAPREDUCE-5221.3.patch, MAPREDUCE-5221.4.patch


 If a combiner is specified using o.a.h.mapreduce.Job.setCombinerClass, it 
 will be silently ignored on the reduce side, since the reduce-side code is only 
 aware of the old-API combiner.
 This doesn't fail the job, since the new combiner key does not deprecate the 
 old key.
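
For context, a minimal sketch of the new-API combiner setup this issue is about (class names here are illustrative): with the behaviour described above, the combiner runs on the map side but is silently skipped during the reduce-side merge.

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;

/** Illustrative new-API (o.a.h.mapreduce) combiner setup affected by this issue. */
public class NewApiCombinerSketch {

  /** A typical summing combiner written against the new API. */
  public static class SumCombiner
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable sum = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int total = 0;
      for (IntWritable value : values) {
        total += value.get();
      }
      sum.set(total);
      context.write(key, sum);
    }
  }

  public static void main(String[] args) throws IOException {
    Job job = Job.getInstance(new Configuration(), "combiner-sketch");
    // Registered through the new API; per the report above, the reduce side
    // ignores this setting because it only looks for an old-API combiner.
    job.setCombinerClass(SumCombiner.class);
  }
}
{code}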

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira