[jira] [Commented] (MAPREDUCE-6255) Fix JobCounter's format to use grouping separator

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14319686#comment-14319686
 ] 

Hudson commented on MAPREDUCE-6255:
---

FAILURE: Integrated in Hadoop-trunk-Commit #7104 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7104/])
MAPREDUCE-6255. Fix JobCounter's format to use grouping separator. Contributed 
by Ryu Kobayashi. (ozawa: rev ba3c80a5ca8fefae9d0e67232f9973e1a0458f58)
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/CountersBlock.java
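
For context, a minimal sketch of comma-grouped number formatting in Java (a hypothetical demo class, not the actual CountersBlock change):
{code}
import java.text.NumberFormat;
import java.util.Locale;

public class CounterFormatDemo {
  public static void main(String[] args) {
    long counterValue = 1234567890L;  // e.g. a large JobCounter value
    // Locale-aware grouping separator via NumberFormat
    String grouped = NumberFormat.getIntegerInstance(Locale.US).format(counterValue);
    // Same idea with String.format and the ',' flag
    String alsoGrouped = String.format(Locale.US, "%,d", counterValue);
    System.out.println(grouped);      // 1,234,567,890
    System.out.println(alsoGrouped);  // 1,234,567,890
  }
}
{code}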


> Fix JobCounter's format to use grouping separator
> -
>
> Key: MAPREDUCE-6255
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6255
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Ryu Kobayashi
>Assignee: Ryu Kobayashi
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: YARN-3186.patch
>
>
> MRv2 Job Counters are not displayed with a comma (grouping) separator, which 
> makes them hard to read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6255) Fix JobCounter's format to use grouping separator

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated MAPREDUCE-6255:
--
   Resolution: Fixed
Fix Version/s: 2.7.0
   Status: Resolved  (was: Patch Available)

> Fix JobCounter's format to use grouping separator
> -
>
> Key: MAPREDUCE-6255
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6255
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Ryu Kobayashi
>Assignee: Ryu Kobayashi
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: YARN-3186.patch
>
>
> MRv2 Job Counters are not displayed with a comma (grouping) separator, which 
> makes them hard to read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6255) Fix JobCounter's format to use grouping separator

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14319680#comment-14319680
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-6255:
---

Committed this to trunk and branch-2. Thanks [~ryu_kobayashi] for your 
contribution and thanks [~brahmareddy] and [~devaraj.k] for your review and 
ticket management!

> Fix JobCounter's format to use grouping separator
> -
>
> Key: MAPREDUCE-6255
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6255
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Ryu Kobayashi
>Assignee: Ryu Kobayashi
>Priority: Minor
> Attachments: YARN-3186.patch
>
>
> MRv2 Job Counters are not displayed with a comma (grouping) separator, which 
> makes them hard to read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6255) Fix JobCounter's format to use grouping separator

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated MAPREDUCE-6255:
--
Summary: Fix JobCounter's format to use grouping separator  (was: MRv2 Job 
Counter display format)

> Fix JobCounter's format to use grouping separator
> -
>
> Key: MAPREDUCE-6255
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6255
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Ryu Kobayashi
>Assignee: Ryu Kobayashi
>Priority: Minor
> Attachments: YARN-3186.patch
>
>
> MRv2 Job Counters are not displayed with a comma (grouping) separator, which 
> makes them hard to read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (MAPREDUCE-6260) Convert site documentation to markdown

2015-02-12 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created MAPREDUCE-6260:
---

 Summary: Convert site documentation to markdown
 Key: MAPREDUCE-6260
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6260
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Allen Wittenauer






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6255) MRv2 Job Counter display format

2015-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14319646#comment-14319646
 ] 

Hadoop QA commented on MAPREDUCE-6255:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12698323/YARN-3186.patch
  against trunk revision 110cf6b.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5193//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5193//console

This message is automatically generated.

> MRv2 Job Counter display format
> ---
>
> Key: MAPREDUCE-6255
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6255
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Ryu Kobayashi
>Assignee: Ryu Kobayashi
>Priority: Minor
> Attachments: YARN-3186.patch
>
>
> MRv2 Job Counters are not displayed with a comma (grouping) separator, which 
> makes them hard to read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6259) -1 job submit time cause IllegalArgumentException when parse the Job history file name and JOB_INIT_FAILED cause -1 job submit time in JobIndexInfo.

2015-02-12 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated MAPREDUCE-6259:
-
Description: 
A -1 job submit time causes an IllegalArgumentException when parsing the Job 
history file name, and JOB_INIT_FAILED leaves a -1 job submit time in JobIndexInfo.
We found the following job history file name, which causes an 
IllegalArgumentException when parsing the job status from the job history file name.
{code}
job_1418398645407_115853--1-worun-kafka%2Dto%2Dhdfs%5Btwo%5D%5B15+topic%28s%29%5D-1423572836007-0-0-FAILED-root.journaling-1423572836007.jhist
{code}
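
As an illustration of why this file name breaks parsing, here is a small, self-contained Java sketch. It is not the actual FileNameIndexUtils code; it only assumes the "-"-delimited field order jobId-submitTime-user-jobName-finishTime-numMaps-numReduces-status-queue-startTime, so a submit time of -1 contributes an extra empty token and shifts the status field:
{code}
public class JhistNameShiftDemo {
  public static void main(String[] args) {
    // Simplified file name (job name shortened); submitTime is -1, hence "--1".
    String name = "job_1418398645407_115853--1-worun-kafkaJob"
        + "-1423572836007-0-0-FAILED-root.journaling-1423572836007";
    String[] parts = name.split("-");
    // Without the shift, index 7 would hold the job status ("FAILED").
    String statusToken = parts[7];
    System.out.println("status token = " + statusToken);  // prints "0"
    // JobState.valueOf("0") would then throw:
    // java.lang.IllegalArgumentException: No enum constant ...JobState.0
  }
}
{code}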
The stack trace for the IllegalArgumentException is
{code}
2015-02-10 04:54:01,863 WARN org.apache.hadoop.mapreduce.v2.hs.PartialJob: 
Exception while parsing job state. Defaulting to KILLED
java.lang.IllegalArgumentException: No enum constant 
org.apache.hadoop.mapreduce.v2.api.records.JobState.0
at java.lang.Enum.valueOf(Enum.java:236)
at 
org.apache.hadoop.mapreduce.v2.api.records.JobState.valueOf(JobState.java:21)
at 
org.apache.hadoop.mapreduce.v2.hs.PartialJob.getState(PartialJob.java:82)
at 
org.apache.hadoop.mapreduce.v2.hs.PartialJob.<init>(PartialJob.java:59)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getAllPartialJobs(CachedHistoryStorage.java:159)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getPartialJobs(CachedHistoryStorage.java:173)
at 
org.apache.hadoop.mapreduce.v2.hs.JobHistory.getPartialJobs(JobHistory.java:284)
at 
org.apache.hadoop.mapreduce.v2.hs.webapp.HsWebServices.getJobs(HsWebServices.java:212)
at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
at 
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at 
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
at 
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:886)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
at 
com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
at 
com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
at 
com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1223)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.Servle

[jira] [Updated] (MAPREDUCE-6255) MRv2 Job Counter display format

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated MAPREDUCE-6255:
--
Hadoop Flags: Reviewed

> MRv2 Job Counter display format
> ---
>
> Key: MAPREDUCE-6255
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6255
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Ryu Kobayashi
>Assignee: Ryu Kobayashi
>Priority: Minor
> Attachments: YARN-3186.patch
>
>
> MRv2 Job Counters are not displayed with a comma (grouping) separator, which 
> makes them hard to read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6259) -1 job submit time cause IllegalArgumentException when parse the Job history file name and JOB_INIT_FAILED cause -1 job submit time in JobIndexInfo.

2015-02-12 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated MAPREDUCE-6259:
-
Description: 
A -1 job submit time causes an IllegalArgumentException when parsing the Job 
history file name, and JOB_INIT_FAILED leaves a -1 job submit time in JobIndexInfo.
We found the following job history file name, which causes an 
IllegalArgumentException when parsing the job status from the job history file name.
{code}
job_1418398645407_115853--1-worun-kafka%2Dto%2Dhdfs%5Btwo%5D%5B15+topic%28s%29%5D-1423572836007-0-0-FAILED-root.journaling-1423572836007.jhist
{code}
The stack trace for the IllegalArgumentException is
{code}
2015-02-10 04:54:01,863 WARN org.apache.hadoop.mapreduce.v2.hs.PartialJob: 
Exception while parsing job state. Defaulting to KILLED
java.lang.IllegalArgumentException: No enum constant 
org.apache.hadoop.mapreduce.v2.api.records.JobState.0
at java.lang.Enum.valueOf(Enum.java:236)
at 
org.apache.hadoop.mapreduce.v2.api.records.JobState.valueOf(JobState.java:21)
at 
org.apache.hadoop.mapreduce.v2.hs.PartialJob.getState(PartialJob.java:82)
at 
org.apache.hadoop.mapreduce.v2.hs.PartialJob.<init>(PartialJob.java:59)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getAllPartialJobs(CachedHistoryStorage.java:159)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getPartialJobs(CachedHistoryStorage.java:173)
at 
org.apache.hadoop.mapreduce.v2.hs.JobHistory.getPartialJobs(JobHistory.java:284)
at 
org.apache.hadoop.mapreduce.v2.hs.webapp.HsWebServices.getJobs(HsWebServices.java:212)
at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
at 
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at 
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
at 
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:886)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
at 
com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
at 
com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
at 
com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
at 
com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1223)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.Servle

[jira] [Updated] (MAPREDUCE-6259) -1 job submit time cause IllegalArgumentException when parse the Job history file name and JOB_INIT_FAILED cause job submit time is not updated in JobIndexInfo.

2015-02-12 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated MAPREDUCE-6259:
-
Summary: -1 job submit time cause IllegalArgumentException when parse the 
Job history file name and JOB_INIT_FAILED cause job submit time is not updated 
in JobIndexInfo.  (was: -1 submit time cause IllegalArgumentException when 
parse the Job history file name and JOB_INIT_FAILED cause submit time is not 
updated in JobIndexInfo.)

> -1 job submit time cause IllegalArgumentException when parse the Job history 
> file name and JOB_INIT_FAILED cause job submit time is not updated in 
> JobIndexInfo.
> 
>
> Key: MAPREDUCE-6259
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6259
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Reporter: zhihai xu
>Assignee: zhihai xu
>
> A -1 submit time causes an IllegalArgumentException when parsing the Job history 
> file name, and JOB_INIT_FAILED leaves the submit time not updated in JobIndexInfo.
> We found the following job history file name, which causes an 
> IllegalArgumentException when parsing the job status from the job history file 
> name.
> {code}
> job_1418398645407_115853--1-worun-kafka%2Dto%2Dhdfs%5Btwo%5D%5B15+topic%28s%29%5D-1423572836007-0-0-FAILED-root.journaling-1423572836007.jhist
> {code}
> When an IOException happens in JobImpl#setup, the Job submit time in 
> JobHistoryEventHandler#MetaInfo#JobIndexInfo is not changed, so the Job 
> submit time remains its [initial value 
> -1|https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java#L1185].
> {code}
>   this.jobIndexInfo =
>   new JobIndexInfo(-1, -1, user, jobName, jobId, -1, -1, null,
>queueName);
> {code}
> The following is the sequence that leads to the -1 submit time:
> 1.
> A job is created in MRAppMaster#serviceStart, and the new job is in state 
> JobStateInternal.NEW after creation.
> {code}
> job = createJob(getConfig(), forcedState, shutDownMessage);
> {code}
> 2.
> JobEventType.JOB_INIT is sent to JobImpl from MRAppMaster#serviceStart
> {code}
>   JobEvent initJobEvent = new JobEvent(job.getID(), 
> JobEventType.JOB_INIT);
>   // Send init to the job (this does NOT trigger job execution)
>   // This is a synchronous call, not an event through dispatcher. We want
>   // job-init to be done completely here.
>   jobEventDispatcher.handle(initJobEvent);
> {code}
> 3.
> After JobImpl receives JobEventType.JOB_INIT, it calls 
> InitTransition#transition:
> {code}
>   .addTransition
>   (JobStateInternal.NEW,
>   EnumSet.of(JobStateInternal.INITED, JobStateInternal.NEW),
>   JobEventType.JOB_INIT,
>   new InitTransition())
> {code}
> 4.
> Then the exception happens in setup(job) in InitTransition#transition before the 
> JobSubmittedEvent is handled.
> The JobSubmittedEvent would update the job submit time; due to the exception, the 
> submit time is still the initial value -1.
> This is the code of InitTransition#transition:
> {code}
> public JobStateInternal transition(JobImpl job, JobEvent event) {
>   job.metrics.submittedJob(job);
>   job.metrics.preparingJob(job);
>   if (job.newApiCommitter) {
> job.jobContext = new JobContextImpl(job.conf, job.oldJobId);
>   } else {
> job.jobContext = new 
> org.apache.hadoop.mapred.JobContextImpl(job.conf, job.oldJobId);
>   }
>   try {
> setup(job);
> job.fs = job.getFileSystem(job.conf);
> //log to job history
> JobSubmittedEvent jse = new JobSubmittedEvent(job.oldJobId,
>   job.conf.get(MRJobConfig.JOB_NAME, "test"), 
> job.conf.get(MRJobConfig.USER_NAME, "mapred"),
> job.appSubmitTime,
> job.remoteJobConfFile.toString(),
> job.jobACLs, job.queueName,
> job.conf.get(MRJobConfig.WORKFLOW_ID, ""),
> job.conf.get(MRJobConfig.WORKFLOW_NAME, ""),
> job.conf.get(MRJobConfig.WORKFLOW_NODE_NAME, ""),
> getWorkflowAdjacencies(job.conf),
> job.conf.get(MRJobConfig.WORKFLOW_TAGS, ""));
> job.eventHandler.handle(new JobHistoryEvent(job.jobId, jse));
> //TODO JH Verify jobACLs, UserName via UGI?
> TaskSplitMetaInfo[] taskSplitMetaInfo = createSplits(job, job.jobId);
> job.numMapTasks = taskSplitMetaInfo.length;
> job.numReduceTasks = job.conf.getInt(MRJobConfig.NUM_REDUCES, 0);
> if (job.numMapTasks == 0 && job.numReduceTasks ==

[jira] [Updated] (MAPREDUCE-6259) -1 job submit time cause IllegalArgumentException when parse the Job history file name and JOB_INIT_FAILED cause -1 job submit time in JobIndexInfo.

2015-02-12 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated MAPREDUCE-6259:
-
Summary: -1 job submit time cause IllegalArgumentException when parse the 
Job history file name and JOB_INIT_FAILED cause -1 job submit time in 
JobIndexInfo.  (was: -1 job submit time cause IllegalArgumentException when 
parse the Job history file name and JOB_INIT_FAILED cause job submit time is 
not updated in JobIndexInfo.)

> -1 job submit time cause IllegalArgumentException when parse the Job history 
> file name and JOB_INIT_FAILED cause -1 job submit time in JobIndexInfo.
> 
>
> Key: MAPREDUCE-6259
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6259
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Reporter: zhihai xu
>Assignee: zhihai xu
>
> A -1 job submit time causes an IllegalArgumentException when parsing the Job 
> history file name, and JOB_INIT_FAILED leaves a -1 job submit time in JobIndexInfo.
> We found the following job history file name, which causes an 
> IllegalArgumentException when parsing the job status from the job history file 
> name.
> {code}
> job_1418398645407_115853--1-worun-kafka%2Dto%2Dhdfs%5Btwo%5D%5B15+topic%28s%29%5D-1423572836007-0-0-FAILED-root.journaling-1423572836007.jhist
> {code}
> When an IOException happens in JobImpl#setup, the Job submit time in 
> JobHistoryEventHandler#MetaInfo#JobIndexInfo is not changed, so the Job 
> submit time remains its [initial value 
> -1|https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java#L1185].
> {code}
>   this.jobIndexInfo =
>   new JobIndexInfo(-1, -1, user, jobName, jobId, -1, -1, null,
>queueName);
> {code}
> The following is the sequence that leads to the -1 submit time:
> 1.
> A job is created in MRAppMaster#serviceStart, and the new job is in state 
> JobStateInternal.NEW after creation.
> {code}
> job = createJob(getConfig(), forcedState, shutDownMessage);
> {code}
> 2.
> JobEventType.JOB_INIT is sent to JobImpl from MRAppMaster#serviceStart
> {code}
>   JobEvent initJobEvent = new JobEvent(job.getID(), 
> JobEventType.JOB_INIT);
>   // Send init to the job (this does NOT trigger job execution)
>   // This is a synchronous call, not an event through dispatcher. We want
>   // job-init to be done completely here.
>   jobEventDispatcher.handle(initJobEvent);
> {code}
> 3.
> After JobImpl receives JobEventType.JOB_INIT, it calls 
> InitTransition#transition:
> {code}
>   .addTransition
>   (JobStateInternal.NEW,
>   EnumSet.of(JobStateInternal.INITED, JobStateInternal.NEW),
>   JobEventType.JOB_INIT,
>   new InitTransition())
> {code}
> 4.
> Then the exception happens in setup(job) in InitTransition#transition before the 
> JobSubmittedEvent is handled.
> The JobSubmittedEvent would update the job submit time; due to the exception, the 
> submit time is still the initial value -1.
> This is the code of InitTransition#transition:
> {code}
> public JobStateInternal transition(JobImpl job, JobEvent event) {
>   job.metrics.submittedJob(job);
>   job.metrics.preparingJob(job);
>   if (job.newApiCommitter) {
> job.jobContext = new JobContextImpl(job.conf, job.oldJobId);
>   } else {
> job.jobContext = new 
> org.apache.hadoop.mapred.JobContextImpl(job.conf, job.oldJobId);
>   }
>   try {
> setup(job);
> job.fs = job.getFileSystem(job.conf);
> //log to job history
> JobSubmittedEvent jse = new JobSubmittedEvent(job.oldJobId,
>   job.conf.get(MRJobConfig.JOB_NAME, "test"), 
> job.conf.get(MRJobConfig.USER_NAME, "mapred"),
> job.appSubmitTime,
> job.remoteJobConfFile.toString(),
> job.jobACLs, job.queueName,
> job.conf.get(MRJobConfig.WORKFLOW_ID, ""),
> job.conf.get(MRJobConfig.WORKFLOW_NAME, ""),
> job.conf.get(MRJobConfig.WORKFLOW_NODE_NAME, ""),
> getWorkflowAdjacencies(job.conf),
> job.conf.get(MRJobConfig.WORKFLOW_TAGS, ""));
> job.eventHandler.handle(new JobHistoryEvent(job.jobId, jse));
> //TODO JH Verify jobACLs, UserName via UGI?
> TaskSplitMetaInfo[] taskSplitMetaInfo = createSplits(job, job.jobId);
> job.numMapTasks = taskSplitMetaInfo.length;
> job.numReduceTasks = job.conf.getInt(MRJobConfig.NUM_REDUCES, 0);
> if (job.numMapTasks == 0 && job.numReduceTasks == 0) {
>   job.addDiagnostic

[jira] [Updated] (MAPREDUCE-6259) -1 submit time cause IllegalArgumentException when parse the Job history file name and JOB_INIT_FAILED cause submit time is not updated in JobIndexInfo.

2015-02-12 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated MAPREDUCE-6259:
-
Description: 
A -1 submit time causes an IllegalArgumentException when parsing the Job history 
file name, and JOB_INIT_FAILED leaves the submit time not updated in JobIndexInfo.
We found the following job history file name, which causes an 
IllegalArgumentException when parsing the job status from the job history file name.
{code}
job_1418398645407_115853--1-worun-kafka%2Dto%2Dhdfs%5Btwo%5D%5B15+topic%28s%29%5D-1423572836007-0-0-FAILED-root.journaling-1423572836007.jhist
{code}

When an IOException happens in JobImpl#setup, the Job submit time in 
JobHistoryEventHandler#MetaInfo#JobIndexInfo is not changed, so the Job submit 
time remains its [initial value 
-1|https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java#L1185].
{code}
  this.jobIndexInfo =
  new JobIndexInfo(-1, -1, user, jobName, jobId, -1, -1, null,
   queueName);
{code}

The following is the sequence that leads to the -1 submit time:
1.
A job is created in MRAppMaster#serviceStart, and the new job is in state 
JobStateInternal.NEW after creation.
{code}
job = createJob(getConfig(), forcedState, shutDownMessage);
{code}

2.
JobEventType.JOB_INIT is sent to JobImpl from MRAppMaster#serviceStart
{code}
  JobEvent initJobEvent = new JobEvent(job.getID(), JobEventType.JOB_INIT);
  // Send init to the job (this does NOT trigger job execution)
  // This is a synchronous call, not an event through dispatcher. We want
  // job-init to be done completely here.
  jobEventDispatcher.handle(initJobEvent);
{code}

3.
After JobImpl receives JobEventType.JOB_INIT, it calls InitTransition#transition:
{code}
  .addTransition
  (JobStateInternal.NEW,
  EnumSet.of(JobStateInternal.INITED, JobStateInternal.NEW),
  JobEventType.JOB_INIT,
  new InitTransition())
{code}

4.
Then the exception happens in setup(job) in InitTransition#transition before the 
JobSubmittedEvent is handled.
The JobSubmittedEvent would update the job submit time; due to the exception, the 
submit time is still the initial value -1.
This is the code of InitTransition#transition:
{code}
public JobStateInternal transition(JobImpl job, JobEvent event) {
  job.metrics.submittedJob(job);
  job.metrics.preparingJob(job);
  if (job.newApiCommitter) {
job.jobContext = new JobContextImpl(job.conf, job.oldJobId);
  } else {
job.jobContext = new org.apache.hadoop.mapred.JobContextImpl(job.conf, 
job.oldJobId);
  }
  try {
setup(job);
job.fs = job.getFileSystem(job.conf);
//log to job history
JobSubmittedEvent jse = new JobSubmittedEvent(job.oldJobId,
  job.conf.get(MRJobConfig.JOB_NAME, "test"), 
job.conf.get(MRJobConfig.USER_NAME, "mapred"),
job.appSubmitTime,
job.remoteJobConfFile.toString(),
job.jobACLs, job.queueName,
job.conf.get(MRJobConfig.WORKFLOW_ID, ""),
job.conf.get(MRJobConfig.WORKFLOW_NAME, ""),
job.conf.get(MRJobConfig.WORKFLOW_NODE_NAME, ""),
getWorkflowAdjacencies(job.conf),
job.conf.get(MRJobConfig.WORKFLOW_TAGS, ""));
job.eventHandler.handle(new JobHistoryEvent(job.jobId, jse));
//TODO JH Verify jobACLs, UserName via UGI?

TaskSplitMetaInfo[] taskSplitMetaInfo = createSplits(job, job.jobId);
job.numMapTasks = taskSplitMetaInfo.length;
job.numReduceTasks = job.conf.getInt(MRJobConfig.NUM_REDUCES, 0);

if (job.numMapTasks == 0 && job.numReduceTasks == 0) {
  job.addDiagnostic("No of maps and reduces are 0 " + job.jobId);
} else if (job.numMapTasks == 0) {
  job.reduceWeight = 0.9f;
} else if (job.numReduceTasks == 0) {
  job.mapWeight = 0.9f;
} else {
  job.mapWeight = job.reduceWeight = 0.45f;
}

checkTaskLimits();

long inputLength = 0;
for (int i = 0; i < job.numMapTasks; ++i) {
  inputLength += taskSplitMetaInfo[i].getInputDataLength();
}

job.makeUberDecision(inputLength);

job.taskAttemptCompletionEvents =
new ArrayList(
job.numMapTasks + job.numReduceTasks + 10);
job.mapAttemptCompletionEvents =
new ArrayList(job.numMapTasks + 10);
job.taskCompletionIdxToMapCompletionIdx = new ArrayList(
job.numMapTasks + job.numReduceTasks + 10);

job.allowedMapFailuresPercent =
job.conf.getInt(MRJobConfig.MAP_FAILURES_MAX_PERCENT, 0);
job.allowedReduceFailuresPercent =
job.conf.getInt(MRJobConfig.REDUCE_FAILUR

[jira] [Updated] (MAPREDUCE-6259) -1 job submit time cause IllegalArgumentException when parse the Job history file name and JOB_INIT_FAILED cause -1 job submit time in JobIndexInfo.

2015-02-12 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated MAPREDUCE-6259:
-
Description: 
A -1 job submit time causes an IllegalArgumentException when parsing the Job 
history file name, and JOB_INIT_FAILED leaves a -1 job submit time in JobIndexInfo.
We found the following job history file name, which causes an 
IllegalArgumentException when parsing the job status from the job history file name.
{code}
job_1418398645407_115853--1-worun-kafka%2Dto%2Dhdfs%5Btwo%5D%5B15+topic%28s%29%5D-1423572836007-0-0-FAILED-root.journaling-1423572836007.jhist
{code}

When an IOException happens in JobImpl#setup, the Job submit time in 
JobHistoryEventHandler#MetaInfo#JobIndexInfo is not changed, so the Job submit 
time remains its [initial value 
-1|https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java#L1185].
{code}
  this.jobIndexInfo =
  new JobIndexInfo(-1, -1, user, jobName, jobId, -1, -1, null,
   queueName);
{code}

The following is the sequence that leads to the -1 job submit time:
1.
A job is created in MRAppMaster#serviceStart, and the new job is in state 
JobStateInternal.NEW after creation.
{code}
job = createJob(getConfig(), forcedState, shutDownMessage);
{code}

2.
JobEventType.JOB_INIT is sent to JobImpl from MRAppMaster#serviceStart
{code}
  JobEvent initJobEvent = new JobEvent(job.getID(), JobEventType.JOB_INIT);
  // Send init to the job (this does NOT trigger job execution)
  // This is a synchronous call, not an event through dispatcher. We want
  // job-init to be done completely here.
  jobEventDispatcher.handle(initJobEvent);
{code}

3.
After JobImpl receives JobEventType.JOB_INIT, it calls InitTransition#transition:
{code}
  .addTransition
  (JobStateInternal.NEW,
  EnumSet.of(JobStateInternal.INITED, JobStateInternal.NEW),
  JobEventType.JOB_INIT,
  new InitTransition())
{code}

4.
Then the exception happens in setup(job) in InitTransition#transition before the 
JobSubmittedEvent is handled.
The JobSubmittedEvent would update the job submit time; due to the exception, the 
submit time is still the initial value -1.
This is the code of InitTransition#transition:
{code}
public JobStateInternal transition(JobImpl job, JobEvent event) {
  job.metrics.submittedJob(job);
  job.metrics.preparingJob(job);
  if (job.newApiCommitter) {
job.jobContext = new JobContextImpl(job.conf, job.oldJobId);
  } else {
job.jobContext = new org.apache.hadoop.mapred.JobContextImpl(job.conf, 
job.oldJobId);
  }
  try {
setup(job);
job.fs = job.getFileSystem(job.conf);
//log to job history
JobSubmittedEvent jse = new JobSubmittedEvent(job.oldJobId,
  job.conf.get(MRJobConfig.JOB_NAME, "test"), 
job.conf.get(MRJobConfig.USER_NAME, "mapred"),
job.appSubmitTime,
job.remoteJobConfFile.toString(),
job.jobACLs, job.queueName,
job.conf.get(MRJobConfig.WORKFLOW_ID, ""),
job.conf.get(MRJobConfig.WORKFLOW_NAME, ""),
job.conf.get(MRJobConfig.WORKFLOW_NODE_NAME, ""),
getWorkflowAdjacencies(job.conf),
job.conf.get(MRJobConfig.WORKFLOW_TAGS, ""));
job.eventHandler.handle(new JobHistoryEvent(job.jobId, jse));
//TODO JH Verify jobACLs, UserName via UGI?

TaskSplitMetaInfo[] taskSplitMetaInfo = createSplits(job, job.jobId);
job.numMapTasks = taskSplitMetaInfo.length;
job.numReduceTasks = job.conf.getInt(MRJobConfig.NUM_REDUCES, 0);

if (job.numMapTasks == 0 && job.numReduceTasks == 0) {
  job.addDiagnostic("No of maps and reduces are 0 " + job.jobId);
} else if (job.numMapTasks == 0) {
  job.reduceWeight = 0.9f;
} else if (job.numReduceTasks == 0) {
  job.mapWeight = 0.9f;
} else {
  job.mapWeight = job.reduceWeight = 0.45f;
}

checkTaskLimits();

long inputLength = 0;
for (int i = 0; i < job.numMapTasks; ++i) {
  inputLength += taskSplitMetaInfo[i].getInputDataLength();
}

job.makeUberDecision(inputLength);

job.taskAttemptCompletionEvents =
new ArrayList(
job.numMapTasks + job.numReduceTasks + 10);
job.mapAttemptCompletionEvents =
new ArrayList(job.numMapTasks + 10);
job.taskCompletionIdxToMapCompletionIdx = new ArrayList(
job.numMapTasks + job.numReduceTasks + 10);

job.allowedMapFailuresPercent =
job.conf.getInt(MRJobConfig.MAP_FAILURES_MAX_PERCENT, 0);
job.allowedReduceFailuresPercent =
job.conf.getInt(MRJobConfig.REDUCE_FAILU

[jira] [Updated] (MAPREDUCE-6259) -1 job submit time cause IllegalArgumentException when parse the Job history file name and JOB_INIT_FAILED cause job submit time is not updated in JobIndexInfo.

2015-02-12 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated MAPREDUCE-6259:
-
Description: 
A -1 job submit time causes an IllegalArgumentException when parsing the Job 
history file name, and JOB_INIT_FAILED leaves a -1 job submit time in JobIndexInfo.
We found the following job history file name, which causes an 
IllegalArgumentException when parsing the job status from the job history file name.
{code}
job_1418398645407_115853--1-worun-kafka%2Dto%2Dhdfs%5Btwo%5D%5B15+topic%28s%29%5D-1423572836007-0-0-FAILED-root.journaling-1423572836007.jhist
{code}

When an IOException happens in JobImpl#setup, the Job submit time in 
JobHistoryEventHandler#MetaInfo#JobIndexInfo is not changed, so the Job submit 
time remains its [initial value 
-1|https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java#L1185].
{code}
  this.jobIndexInfo =
  new JobIndexInfo(-1, -1, user, jobName, jobId, -1, -1, null,
   queueName);
{code}

The following is the sequence that leads to the -1 submit time:
1.
A job is created in MRAppMaster#serviceStart, and the new job is in state 
JobStateInternal.NEW after creation.
{code}
job = createJob(getConfig(), forcedState, shutDownMessage);
{code}

2.
JobEventType.JOB_INIT is sent to JobImpl from MRAppMaster#serviceStart
{code}
  JobEvent initJobEvent = new JobEvent(job.getID(), JobEventType.JOB_INIT);
  // Send init to the job (this does NOT trigger job execution)
  // This is a synchronous call, not an event through dispatcher. We want
  // job-init to be done completely here.
  jobEventDispatcher.handle(initJobEvent);
{code}

3.
After JobImpl receives JobEventType.JOB_INIT, it calls InitTransition#transition:
{code}
  .addTransition
  (JobStateInternal.NEW,
  EnumSet.of(JobStateInternal.INITED, JobStateInternal.NEW),
  JobEventType.JOB_INIT,
  new InitTransition())
{code}

4.
Then the exception happens in setup(job) in InitTransition#transition before the 
JobSubmittedEvent is handled.
The JobSubmittedEvent would update the job submit time; due to the exception, the 
submit time is still the initial value -1.
This is the code of InitTransition#transition:
{code}
public JobStateInternal transition(JobImpl job, JobEvent event) {
  job.metrics.submittedJob(job);
  job.metrics.preparingJob(job);
  if (job.newApiCommitter) {
job.jobContext = new JobContextImpl(job.conf, job.oldJobId);
  } else {
job.jobContext = new org.apache.hadoop.mapred.JobContextImpl(job.conf, 
job.oldJobId);
  }
  try {
setup(job);
job.fs = job.getFileSystem(job.conf);
//log to job history
JobSubmittedEvent jse = new JobSubmittedEvent(job.oldJobId,
  job.conf.get(MRJobConfig.JOB_NAME, "test"), 
job.conf.get(MRJobConfig.USER_NAME, "mapred"),
job.appSubmitTime,
job.remoteJobConfFile.toString(),
job.jobACLs, job.queueName,
job.conf.get(MRJobConfig.WORKFLOW_ID, ""),
job.conf.get(MRJobConfig.WORKFLOW_NAME, ""),
job.conf.get(MRJobConfig.WORKFLOW_NODE_NAME, ""),
getWorkflowAdjacencies(job.conf),
job.conf.get(MRJobConfig.WORKFLOW_TAGS, ""));
job.eventHandler.handle(new JobHistoryEvent(job.jobId, jse));
//TODO JH Verify jobACLs, UserName via UGI?

TaskSplitMetaInfo[] taskSplitMetaInfo = createSplits(job, job.jobId);
job.numMapTasks = taskSplitMetaInfo.length;
job.numReduceTasks = job.conf.getInt(MRJobConfig.NUM_REDUCES, 0);

if (job.numMapTasks == 0 && job.numReduceTasks == 0) {
  job.addDiagnostic("No of maps and reduces are 0 " + job.jobId);
} else if (job.numMapTasks == 0) {
  job.reduceWeight = 0.9f;
} else if (job.numReduceTasks == 0) {
  job.mapWeight = 0.9f;
} else {
  job.mapWeight = job.reduceWeight = 0.45f;
}

checkTaskLimits();

long inputLength = 0;
for (int i = 0; i < job.numMapTasks; ++i) {
  inputLength += taskSplitMetaInfo[i].getInputDataLength();
}

job.makeUberDecision(inputLength);

job.taskAttemptCompletionEvents =
new ArrayList(
job.numMapTasks + job.numReduceTasks + 10);
job.mapAttemptCompletionEvents =
new ArrayList(job.numMapTasks + 10);
job.taskCompletionIdxToMapCompletionIdx = new ArrayList(
job.numMapTasks + job.numReduceTasks + 10);

job.allowedMapFailuresPercent =
job.conf.getInt(MRJobConfig.MAP_FAILURES_MAX_PERCENT, 0);
job.allowedReduceFailuresPercent =
job.conf.getInt(MRJobConfig.REDUCE_FAILURES_M

[jira] [Updated] (MAPREDUCE-6259) -1 submit time cause IllegalArgumentException when parse the Job history file name and JOB_INIT_FAILED cause submit time is not updated in JobIndexInfo.

2015-02-12 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated MAPREDUCE-6259:
-
Description: 
A -1 submit time causes an IllegalArgumentException when parsing the Job history 
file name, and JOB_INIT_FAILED leaves the submit time not updated in JobIndexInfo.
We found the following job history file name, which causes an 
IllegalArgumentException when parsing the job status from the job history file name.
{code}
job_1418398645407_115853--1-worun-kafka%2Dto%2Dhdfs%5Btwo%5D%5B15+topic%28s%29%5D-1423572836007-0-0-FAILED-root.journaling-1423572836007.jhist
{code}

When an IOException happens in JobImpl#setup, the Job submit time in 
JobHistoryEventHandler#MetaInfo#JobIndexInfo is not changed, so the Job submit 
time remains its [initial value 
-1|https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java#L1185].
{code}
  this.jobIndexInfo =
  new JobIndexInfo(-1, -1, user, jobName, jobId, -1, -1, null,
   queueName);
{code}

The following is the sequence that leads to the -1 submit time:
1.
A job is created in MRAppMaster#serviceStart, and the new job is in state 
JobStateInternal.NEW after creation.
{code}
job = createJob(getConfig(), forcedState, shutDownMessage);
{code}

2.
JobEventType.JOB_INIT is sent to JobImpl from MRAppMaster#serviceStart
{code}
  JobEvent initJobEvent = new JobEvent(job.getID(), JobEventType.JOB_INIT);
  // Send init to the job (this does NOT trigger job execution)
  // This is a synchronous call, not an event through dispatcher. We want
  // job-init to be done completely here.
  jobEventDispatcher.handle(initJobEvent);
{code}

3.
After JobImpl receives JobEventType.JOB_INIT, it calls InitTransition#transition:
{code}
  .addTransition
  (JobStateInternal.NEW,
  EnumSet.of(JobStateInternal.INITED, JobStateInternal.NEW),
  JobEventType.JOB_INIT,
  new InitTransition())
{code}

4.
Then the exception happens in setup(job) in InitTransition#transition before the 
JobSubmittedEvent is handled.
The JobSubmittedEvent would update the job submit time; due to the exception, the 
submit time is still the initial value -1.
This is the code of InitTransition#transition:
{code}
public JobStateInternal transition(JobImpl job, JobEvent event) {
  job.metrics.submittedJob(job);
  job.metrics.preparingJob(job);
  if (job.newApiCommitter) {
job.jobContext = new JobContextImpl(job.conf, job.oldJobId);
  } else {
job.jobContext = new org.apache.hadoop.mapred.JobContextImpl(job.conf, 
job.oldJobId);
  }
  try {
setup(job);
job.fs = job.getFileSystem(job.conf);
//log to job history
JobSubmittedEvent jse = new JobSubmittedEvent(job.oldJobId,
  job.conf.get(MRJobConfig.JOB_NAME, "test"), 
job.conf.get(MRJobConfig.USER_NAME, "mapred"),
job.appSubmitTime,
job.remoteJobConfFile.toString(),
job.jobACLs, job.queueName,
job.conf.get(MRJobConfig.WORKFLOW_ID, ""),
job.conf.get(MRJobConfig.WORKFLOW_NAME, ""),
job.conf.get(MRJobConfig.WORKFLOW_NODE_NAME, ""),
getWorkflowAdjacencies(job.conf),
job.conf.get(MRJobConfig.WORKFLOW_TAGS, ""));
job.eventHandler.handle(new JobHistoryEvent(job.jobId, jse));
//TODO JH Verify jobACLs, UserName via UGI?

TaskSplitMetaInfo[] taskSplitMetaInfo = createSplits(job, job.jobId);
job.numMapTasks = taskSplitMetaInfo.length;
job.numReduceTasks = job.conf.getInt(MRJobConfig.NUM_REDUCES, 0);

if (job.numMapTasks == 0 && job.numReduceTasks == 0) {
  job.addDiagnostic("No of maps and reduces are 0 " + job.jobId);
} else if (job.numMapTasks == 0) {
  job.reduceWeight = 0.9f;
} else if (job.numReduceTasks == 0) {
  job.mapWeight = 0.9f;
} else {
  job.mapWeight = job.reduceWeight = 0.45f;
}

checkTaskLimits();

long inputLength = 0;
for (int i = 0; i < job.numMapTasks; ++i) {
  inputLength += taskSplitMetaInfo[i].getInputDataLength();
}

job.makeUberDecision(inputLength);

job.taskAttemptCompletionEvents =
new ArrayList(
job.numMapTasks + job.numReduceTasks + 10);
job.mapAttemptCompletionEvents =
new ArrayList(job.numMapTasks + 10);
job.taskCompletionIdxToMapCompletionIdx = new ArrayList(
job.numMapTasks + job.numReduceTasks + 10);

job.allowedMapFailuresPercent =
job.conf.getInt(MRJobConfig.MAP_FAILURES_MAX_PERCENT, 0);
job.allowedReduceFailuresPercent =
job.conf.getInt(MRJobConfig.REDUCE_FAILUR

[jira] [Updated] (MAPREDUCE-6255) MRv2 Job Counter display format

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated MAPREDUCE-6255:
--
Status: Patch Available  (was: Open)

> MRv2 Job Counter display format
> ---
>
> Key: MAPREDUCE-6255
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6255
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Ryu Kobayashi
>Priority: Minor
> Attachments: YARN-3186.patch
>
>
> MRv2 Job Counters are not displayed with a comma (grouping) separator, which 
> makes them hard to read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6255) MRv2 Job Counter display format

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated MAPREDUCE-6255:
--
Assignee: Ryu Kobayashi

> MRv2 Job Counter display format
> ---
>
> Key: MAPREDUCE-6255
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6255
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Ryu Kobayashi
>Assignee: Ryu Kobayashi
>Priority: Minor
> Attachments: YARN-3186.patch
>
>
> MRv2 Job Counters are not displayed with a comma (grouping) separator, which 
> makes them hard to read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6255) MRv2 Job Counter display format

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14319614#comment-14319614
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-6255:
---

after CI passed.

> MRv2 Job Counter display format
> ---
>
> Key: MAPREDUCE-6255
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6255
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Ryu Kobayashi
>Priority: Minor
> Attachments: YARN-3186.patch
>
>
> MRv2 Job Counters are not displayed with a comma (grouping) separator, which 
> makes them hard to read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6255) MRv2 Job Counter display format

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14319613#comment-14319613
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-6255:
---

+1, committing this shortly.

> MRv2 Job Counter display format
> ---
>
> Key: MAPREDUCE-6255
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6255
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Ryu Kobayashi
>Priority: Minor
> Attachments: YARN-3186.patch
>
>
> MRv2 Job Counters are not displayed with a comma (grouping) separator, which 
> makes them hard to read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6259) -1 submit time cause IllegalArgumentException when parse the Job history file name and JOB_INIT_FAILED cause submit time is not updated in JobIndexInfo.

2015-02-12 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated MAPREDUCE-6259:
-
Description: 
A -1 submit time causes an IllegalArgumentException when parsing the Job history 
file name, and JOB_INIT_FAILED leaves the submit time not updated in JobIndexInfo.
We found the following job history file name, which causes an 
IllegalArgumentException when parsing the job status from the job history file name.
{code}
job_1418398645407_115853--1-worun-kafka%2Dto%2Dhdfs%5Btwo%5D%5B15+topic%28s%29%5D-1423572836007-0-0-FAILED-root.journaling-1423572836007.jhist
{code}

When an IOException happens in JobImpl#setup, the Job submit time in 
JobHistoryEventHandler#MetaInfo#JobIndexInfo is not changed, so the Job submit 
time remains its [initial value 
-1|https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java#L1185].
{code}
  this.jobIndexInfo =
  new JobIndexInfo(-1, -1, user, jobName, jobId, -1, -1, null,
   queueName);
{code}

The following is the sequence that leads to the -1 submit time:
1.
A job is created in MRAppMaster#serviceStart, and the new job is in state 
JobStateInternal.NEW after creation.
{code}
job = createJob(getConfig(), forcedState, shutDownMessage);
{code}

2.
JobEventType.JOB_INIT is sent to JobImpl from MRAppMaster#serviceStart
{code}
  JobEvent initJobEvent = new JobEvent(job.getID(), JobEventType.JOB_INIT);
  // Send init to the job (this does NOT trigger job execution)
  // This is a synchronous call, not an event through dispatcher. We want
  // job-init to be done completely here.
  jobEventDispatcher.handle(initJobEvent);
{code}

3.
After JobImpl receives JobEventType.JOB_INIT, it calls InitTransition#transition:
{code}
  .addTransition
  (JobStateInternal.NEW,
  EnumSet.of(JobStateInternal.INITED, JobStateInternal.NEW),
  JobEventType.JOB_INIT,
  new InitTransition())
{code}

4.
Then the exception happens in setup(job) in InitTransition#transition before the 
JobSubmittedEvent is handled.
The JobSubmittedEvent would update the job submit time; due to the exception, the 
submit time is still the initial value -1.
This is the code of InitTransition#transition:
{code}
public JobStateInternal transition(JobImpl job, JobEvent event) {
  job.metrics.submittedJob(job);
  job.metrics.preparingJob(job);
  if (job.newApiCommitter) {
job.jobContext = new JobContextImpl(job.conf, job.oldJobId);
  } else {
job.jobContext = new org.apache.hadoop.mapred.JobContextImpl(job.conf, 
job.oldJobId);
  }
  try {
setup(job);
job.fs = job.getFileSystem(job.conf);
//log to job history
JobSubmittedEvent jse = new JobSubmittedEvent(job.oldJobId,
  job.conf.get(MRJobConfig.JOB_NAME, "test"), 
job.conf.get(MRJobConfig.USER_NAME, "mapred"),
job.appSubmitTime,
job.remoteJobConfFile.toString(),
job.jobACLs, job.queueName,
job.conf.get(MRJobConfig.WORKFLOW_ID, ""),
job.conf.get(MRJobConfig.WORKFLOW_NAME, ""),
job.conf.get(MRJobConfig.WORKFLOW_NODE_NAME, ""),
getWorkflowAdjacencies(job.conf),
job.conf.get(MRJobConfig.WORKFLOW_TAGS, ""));
job.eventHandler.handle(new JobHistoryEvent(job.jobId, jse));
//TODO JH Verify jobACLs, UserName via UGI?

TaskSplitMetaInfo[] taskSplitMetaInfo = createSplits(job, job.jobId);
job.numMapTasks = taskSplitMetaInfo.length;
job.numReduceTasks = job.conf.getInt(MRJobConfig.NUM_REDUCES, 0);

if (job.numMapTasks == 0 && job.numReduceTasks == 0) {
  job.addDiagnostic("No of maps and reduces are 0 " + job.jobId);
} else if (job.numMapTasks == 0) {
  job.reduceWeight = 0.9f;
} else if (job.numReduceTasks == 0) {
  job.mapWeight = 0.9f;
} else {
  job.mapWeight = job.reduceWeight = 0.45f;
}

checkTaskLimits();

long inputLength = 0;
for (int i = 0; i < job.numMapTasks; ++i) {
  inputLength += taskSplitMetaInfo[i].getInputDataLength();
}

job.makeUberDecision(inputLength);

job.taskAttemptCompletionEvents =
new ArrayList(
job.numMapTasks + job.numReduceTasks + 10);
job.mapAttemptCompletionEvents =
new ArrayList(job.numMapTasks + 10);
job.taskCompletionIdxToMapCompletionIdx = new ArrayList(
job.numMapTasks + job.numReduceTasks + 10);

job.allowedMapFailuresPercent =
job.conf.getInt(MRJobConfig.MAP_FAILURES_MAX_PERCENT, 0);
job.allowedReduceFailuresPercent =
job.conf.getInt(MRJobConfig.REDUCE_FAILUR

[jira] [Updated] (MAPREDUCE-6259) -1 submit time cause IllegalArgumentException when parse the Job history file name and JOB_INIT_FAILED cause -1 submit time in JobIndexInfo.

2015-02-12 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated MAPREDUCE-6259:
-
Description: 
A -1 submit time causes an IllegalArgumentException when parsing the Job history 
file name, and JOB_INIT_FAILED leaves a -1 submit time in JobIndexInfo.
We found the following job history file name, which causes an 
IllegalArgumentException when parsing the job status from the job history file name.
{code}
job_1418398645407_115853--1-worun-kafka%2Dto%2Dhdfs%5Btwo%5D%5B15+topic%28s%29%5D-1423572836007-0-0-FAILED-root.journaling-1423572836007.jhist
{code}

When an IOException happens in JobImpl#setup, the Job submit time in 
JobHistoryEventHandler#MetaInfo#JobIndexInfo is not changed, so the Job submit 
time remains its [initial value 
-1|https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java#L1185].
{code}
  this.jobIndexInfo =
  new JobIndexInfo(-1, -1, user, jobName, jobId, -1, -1, null,
   queueName);
{code}

The following is the sequence that leads to the -1 submit time:
1.
A job is created in MRAppMaster#serviceStart, and the new job is in state 
JobStateInternal.NEW after creation.
{code}
job = createJob(getConfig(), forcedState, shutDownMessage);
{code}

2.
JobEventType.JOB_INIT is sent to JobImpl from MRAppMaster#serviceStart
{code}
  JobEvent initJobEvent = new JobEvent(job.getID(), JobEventType.JOB_INIT);
  // Send init to the job (this does NOT trigger job execution)
  // This is a synchronous call, not an event through dispatcher. We want
  // job-init to be done completely here.
  jobEventDispatcher.handle(initJobEvent);
{code}

3.
After JobImpl receives JobEventType.JOB_INIT, it calls InitTransition#transition:
{code}
  .addTransition
  (JobStateInternal.NEW,
  EnumSet.of(JobStateInternal.INITED, JobStateInternal.NEW),
  JobEventType.JOB_INIT,
  new InitTransition())
{code}

4.
Then the exception happens in setup(job) in InitTransition#transition before the 
JobSubmittedEvent is handled.
The JobSubmittedEvent would update the job submit time; due to the exception, the 
submit time is still the initial value -1.
This is the code of InitTransition#transition:
{code}
public JobStateInternal transition(JobImpl job, JobEvent event) {
  job.metrics.submittedJob(job);
  job.metrics.preparingJob(job);
  if (job.newApiCommitter) {
job.jobContext = new JobContextImpl(job.conf, job.oldJobId);
  } else {
job.jobContext = new org.apache.hadoop.mapred.JobContextImpl(job.conf, 
job.oldJobId);
  }
  try {
setup(job);
job.fs = job.getFileSystem(job.conf);
//log to job history
JobSubmittedEvent jse = new JobSubmittedEvent(job.oldJobId,
  job.conf.get(MRJobConfig.JOB_NAME, "test"), 
job.conf.get(MRJobConfig.USER_NAME, "mapred"),
job.appSubmitTime,
job.remoteJobConfFile.toString(),
job.jobACLs, job.queueName,
job.conf.get(MRJobConfig.WORKFLOW_ID, ""),
job.conf.get(MRJobConfig.WORKFLOW_NAME, ""),
job.conf.get(MRJobConfig.WORKFLOW_NODE_NAME, ""),
getWorkflowAdjacencies(job.conf),
job.conf.get(MRJobConfig.WORKFLOW_TAGS, ""));
job.eventHandler.handle(new JobHistoryEvent(job.jobId, jse));
//TODO JH Verify jobACLs, UserName via UGI?

TaskSplitMetaInfo[] taskSplitMetaInfo = createSplits(job, job.jobId);
job.numMapTasks = taskSplitMetaInfo.length;
job.numReduceTasks = job.conf.getInt(MRJobConfig.NUM_REDUCES, 0);

if (job.numMapTasks == 0 && job.numReduceTasks == 0) {
  job.addDiagnostic("No of maps and reduces are 0 " + job.jobId);
} else if (job.numMapTasks == 0) {
  job.reduceWeight = 0.9f;
} else if (job.numReduceTasks == 0) {
  job.mapWeight = 0.9f;
} else {
  job.mapWeight = job.reduceWeight = 0.45f;
}

checkTaskLimits();

long inputLength = 0;
for (int i = 0; i < job.numMapTasks; ++i) {
  inputLength += taskSplitMetaInfo[i].getInputDataLength();
}

job.makeUberDecision(inputLength);

job.taskAttemptCompletionEvents =
new ArrayList(
job.numMapTasks + job.numReduceTasks + 10);
job.mapAttemptCompletionEvents =
new ArrayList(job.numMapTasks + 10);
job.taskCompletionIdxToMapCompletionIdx = new ArrayList(
job.numMapTasks + job.numReduceTasks + 10);

job.allowedMapFailuresPercent =
job.conf.getInt(MRJobConfig.MAP_FAILURES_MAX_PERCENT, 0);
job.allowedReduceFailuresPercent =
job.conf.getInt(MRJobConfig.REDUCE_FAILURES_MAXPERCENT, 0);
{code}

[jira] [Commented] (MAPREDUCE-6228) Add truncate operation to SLive

2015-02-12 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14319599#comment-14319599
 ] 

Konstantin Shvachko commented on MAPREDUCE-6228:


TestJobConf is failing due to MAPREDUCE-6223.
+1 - pending cluster test confirmation.

> Add truncate operation to SLive
> ---
>
> Key: MAPREDUCE-6228
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6228
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: benchmarks, test
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Attachments: MAPREDUCE-6228.patch, MAPREDUCE-6228.patch, 
> MAPREDUCE-6228.patch
>
>
> Add truncate into the mix of operations for SLive test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6228) Add truncate operation to SLive

2015-02-12 Thread Plamen Jeliazkov (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14319557#comment-14319557
 ] 

Plamen Jeliazkov commented on MAPREDUCE-6228:
-

Test failure appears to be unrelated to my changes.

I saw that it also failed for build #5190 on Jenkins: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5190/, a build previous 
to my latest, which was #5192.

> Add truncate operation to SLive
> ---
>
> Key: MAPREDUCE-6228
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6228
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: benchmarks, test
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Attachments: MAPREDUCE-6228.patch, MAPREDUCE-6228.patch, 
> MAPREDUCE-6228.patch
>
>
> Add truncate into the mix of operations for SLive test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6228) Add truncate operation to SLive

2015-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14319524#comment-14319524
 ] 

Hadoop QA commented on MAPREDUCE-6228:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12698588/MAPREDUCE-6228.patch
  against trunk revision 99f6bd4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient:

  org.apache.hadoop.conf.TestJobConf

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5192//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5192//console

This message is automatically generated.

> Add truncate operation to SLive
> ---
>
> Key: MAPREDUCE-6228
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6228
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: benchmarks, test
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Attachments: MAPREDUCE-6228.patch, MAPREDUCE-6228.patch, 
> MAPREDUCE-6228.patch
>
>
> Add truncate into the mix of operations for SLive test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (MAPREDUCE-6259) -1 submit time cause IllegalArgumentException when parse the Job history file name and JOB_INIT_FAILED cause -1 submit time in JobIndexInfo.

2015-02-12 Thread zhihai xu (JIRA)
zhihai xu created MAPREDUCE-6259:


 Summary: -1 submit time cause IllegalArgumentException when parse 
the Job history file name and JOB_INIT_FAILED cause -1 submit time in 
JobIndexInfo.
 Key: MAPREDUCE-6259
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6259
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver
Reporter: zhihai xu
Assignee: zhihai xu


A -1 submit time causes an IllegalArgumentException when the job history file 
name is parsed, and JOB_INIT_FAILED causes the -1 submit time in JobIndexInfo.
We found the following job history file name, which causes an 
IllegalArgumentException when the job status is parsed from the file name.

When an IOException happens in JobImpl#setup, the job submit time in 
JobHistoryEventHandler#MetaInfo#JobIndexInfo is not updated and remains at its 
[initial value 
-1|https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java#L1185].
{code}
  this.jobIndexInfo =
  new JobIndexInfo(-1, -1, user, jobName, jobId, -1, -1, null,
   queueName);
{code}

The following is the sequence that leads to the -1 submit time:
1. 
A job is created in MRAppMaster#serviceStart; after creation the new job is in 
state JobStateInternal.NEW.
{code}
job = createJob(getConfig(), forcedState, shutDownMessage);
{code}

2.
JobEventType.JOB_INIT is sent to JobImpl from MRAppMaster#serviceStart
{code}
  JobEvent initJobEvent = new JobEvent(job.getID(), JobEventType.JOB_INIT);
  // Send init to the job (this does NOT trigger job execution)
  // This is a synchronous call, not an event through dispatcher. We want
  // job-init to be done completely here.
  jobEventDispatcher.handle(initJobEvent);
{code}

3.
After JobImpl receives JobEventType.JOB_INIT, it calls InitTransition#transition
{code}
  .addTransition
  (JobStateInternal.NEW,
  EnumSet.of(JobStateInternal.INITED, JobStateInternal.NEW),
  JobEventType.JOB_INIT,
  new InitTransition())
{code}

4.
The exception is then thrown from setup(job) in InitTransition#transition before 
the JobSubmittedEvent is handled.
JobSubmittedEvent is what updates the job submit time, so because of the 
exception the submit time stays at the initial value -1.
This is the code of InitTransition#transition:
{code}
public JobStateInternal transition(JobImpl job, JobEvent event) {
  job.metrics.submittedJob(job);
  job.metrics.preparingJob(job);
  if (job.newApiCommitter) {
job.jobContext = new JobContextImpl(job.conf, job.oldJobId);
  } else {
job.jobContext = new org.apache.hadoop.mapred.JobContextImpl(job.conf, 
job.oldJobId);
  }
  try {
setup(job);
job.fs = job.getFileSystem(job.conf);
//log to job history
JobSubmittedEvent jse = new JobSubmittedEvent(job.oldJobId,
  job.conf.get(MRJobConfig.JOB_NAME, "test"), 
job.conf.get(MRJobConfig.USER_NAME, "mapred"),
job.appSubmitTime,
job.remoteJobConfFile.toString(),
job.jobACLs, job.queueName,
job.conf.get(MRJobConfig.WORKFLOW_ID, ""),
job.conf.get(MRJobConfig.WORKFLOW_NAME, ""),
job.conf.get(MRJobConfig.WORKFLOW_NODE_NAME, ""),
getWorkflowAdjacencies(job.conf),
job.conf.get(MRJobConfig.WORKFLOW_TAGS, ""));
job.eventHandler.handle(new JobHistoryEvent(job.jobId, jse));
//TODO JH Verify jobACLs, UserName via UGI?

TaskSplitMetaInfo[] taskSplitMetaInfo = createSplits(job, job.jobId);
job.numMapTasks = taskSplitMetaInfo.length;
job.numReduceTasks = job.conf.getInt(MRJobConfig.NUM_REDUCES, 0);

if (job.numMapTasks == 0 && job.numReduceTasks == 0) {
  job.addDiagnostic("No of maps and reduces are 0 " + job.jobId);
} else if (job.numMapTasks == 0) {
  job.reduceWeight = 0.9f;
} else if (job.numReduceTasks == 0) {
  job.mapWeight = 0.9f;
} else {
  job.mapWeight = job.reduceWeight = 0.45f;
}

checkTaskLimits();

long inputLength = 0;
for (int i = 0; i < job.numMapTasks; ++i) {
  inputLength += taskSplitMetaInfo[i].getInputDataLength();
}

job.makeUberDecision(inputLength);

job.taskAttemptCompletionEvents =
new ArrayList<TaskAttemptCompletionEvent>(
job.numMapTasks + job.numReduceTasks + 10);
job.mapAttemptCompletionEvents =
new ArrayList<TaskCompletionEvent>(job.numMapTasks + 10);
job.taskCompletionIdxToMapCompletionIdx = new ArrayList<Integer>(
job.numMapTasks + job.numReduceTasks + 10);

job.allowedMapFailuresPercent =
job.conf.getInt(MRJobConfig.MAP_FAILURES_MAX_PERCENT, 0);
{code}
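
To make the failure mode concrete, here is a minimal, self-contained sketch of 
why the -1 submit time breaks the file-name parsing. It is not the actual 
FileNameIndexUtils code; the field order, the indices, and the sample values 
are simplified assumptions. The point is that "-1" contributes an extra 
delimiter, so every later field shifts by one position and the token expected 
to be the job status is no longer a valid enum constant, which makes 
Enum.valueOf throw the IllegalArgumentException.
{code}
public class SubmitTimeParsingSketch {

  enum JobStatus { SUCCEEDED, FAILED, KILLED }

  // Simplified layout: jobId-submitTime-user-jobName-finishTime-maps-reduces-status
  static String historyFileName(long submitTime) {
    return String.join("-",
        "job_1423000000000_0001", Long.toString(submitTime), "zxu",
        "wordcount", "1423000050000", "2", "1", "SUCCEEDED");
  }

  static JobStatus parseStatus(String fileName) {
    String[] parts = fileName.split("-");
    // The status is read from a fixed index; "-1" injects an extra "-",
    // shifting every later field by one position.
    return JobStatus.valueOf(parts[7]);
  }

  public static void main(String[] args) {
    // Normal case: parts[7] is "SUCCEEDED".
    System.out.println(parseStatus(historyFileName(1422999990000L)));

    // -1 submit time: parts[7] is now "1", not a JobStatus constant.
    try {
      parseStatus(historyFileName(-1L));
    } catch (IllegalArgumentException e) {
      System.out.println("IllegalArgumentException: " + e.getMessage());
    }
  }
}
{code}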

[jira] [Commented] (MAPREDUCE-6258) add support to back up JHS files from application master

2015-02-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14319387#comment-14319387
 ] 

Hadoop QA commented on MAPREDUCE-6258:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12698571/MAPREDUCE-6258.patch
  against trunk revision 99f6bd4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:red}-1 eclipse:eclipse{color}.  The patch failed to build with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5191//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5191//console

This message is automatically generated.

> add support to back up JHS files from application master
> 
>
> Key: MAPREDUCE-6258
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6258
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: applicationmaster
>Affects Versions: 2.4.1
>Reporter: Jian Fang
> Attachments: MAPREDUCE-6258.patch
>
>
> In hadoop two, job history files are stored on HDFS with a default retention 
> period of one week. In a cloud environment, these HDFS files are actually 
> stored on the disks of ephemeral instances that could go away once the 
> instances are terminated. Users may want to back up the job history files for 
> issue investigation and performance analysis before and after the cluster is 
> terminated. 
> A centralized backup mechanism could have a scalability issue for big and 
> busy Hadoop clusters where there are probably tens of thousands of jobs every 
> day. As a result, it is preferred to have a distributed way to back up the 
> job history files in this case. To achieve this goal, we could add a new 
> feature to back up the job history files in Application master. More 
> specifically, we could copy the job history files to a backup path when they 
> are moved from the temporary staging directory to the intermediate_done path 
> in application master. Since application masters could run on any slave nodes 
> on a Hadoop cluster, we could achieve a better scalability by backing up the 
> job history files in a distributed fashion.
> Please be aware, the backup path should be managed by the Hadoop users based 
> on their needs. For example, some Hadoop users may copy the job history files 
> to a cloud storage directly and keep them there forever. While some other 
> users may want to store the job history files on local disks and clean them 
> up from time to time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-5951) Add support for the YARN Shared Cache

2015-02-12 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14319375#comment-14319375
 ] 

Karthik Kambatla commented on MAPREDUCE-5951:
-

Sorry for the delay in getting to this. Getting a continuous chunk of time to 
look at this somewhat large patch was hard. 

Here are my first round of comments - a combination of high-level and detailed 
comments. Let us see if we can get some of this in through other JIRAs first, 
to allow for a more thorough review.
# DistributedCache changes aren’t central to what this JIRA is trying to 
address. Could we leave them out and address in another JIRA? 
## This has nothing to do with this patch, but it would be nice to make the 
code around setting CLASSPATH_FILES a little more readable. Could we define 
another String prefix that holds “” or the classpath, based on whether 
classpath is null?
# Job
## The new APIs should all be @Unstable
## Let us make the javadoc for the new APIs a little more formal - we don’t 
need to mention SCMClientProtocol.use, or that the APIs are intended for user 
use. Even for the return value, I would go with something like “If shared cache 
is enabled and the resource added successfully, return true. Otherwise, return 
false.”
## How about renaming the methods to addFileToSharedCache, 
addArchiveToSharedCache, addFileToSharedCacheAndClasspath? 
## Make both new methods private static instead of static private.
# JobID changes might not be required. Use ConverterUtils#toApplicationId? 
# JobImpl
## cleanupSharedCacheResources - nit: I would check for (checksums == null || 
checksums.length == 0) and return to save on indentations. 80 chars is already 
too small.
## cleanupSharedCacheUploadPolicies - javadoc should use block comments. Well, 
maybe a nit.
# JobSubmitter
## Can we move the code from JobSubmitter to FileUploader (maybe we need a more 
descriptive name) in another JIRA and look at that first if needed? Otherwise, 
it is hard to review the changes.
## Maybe I am misreading the patch. Is this patch hardcoding MR job submission 
to always use the SharedCache? If yes, we should definitely avoid that.
# mapred-default.xml: We need a little more fool-proof config. The way the 
patch currently is, a typo will lead to unexpected behavior without any 
warnings.


> Add support for the YARN Shared Cache
> -
>
> Key: MAPREDUCE-5951
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5951
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: MAPREDUCE-5951-trunk-v1.patch, 
> MAPREDUCE-5951-trunk-v2.patch, MAPREDUCE-5951-trunk-v3.patch, 
> MAPREDUCE-5951-trunk-v4.patch, MAPREDUCE-5951-trunk-v5.patch, 
> MAPREDUCE-5951-trunk-v6.patch
>
>
> Implement the necessary changes so that the MapReduce application can 
> leverage the new YARN shared cache (i.e. YARN-1492).
> Specifically, allow per-job configuration so that MapReduce jobs can specify 
> which set of resources they would like to cache (i.e. jobjar, libjars, 
> archives, files).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6228) Add truncate operation to SLive

2015-02-12 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov updated MAPREDUCE-6228:

Attachment: MAPREDUCE-6228.patch

Attaching patch with Konstantin's points addressed.

# Done.
# Done.
# Done.
# Okay.
# I do not have access to a cluster at the moment but I will confirm once I do.

> Add truncate operation to SLive
> ---
>
> Key: MAPREDUCE-6228
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6228
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: benchmarks, test
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Attachments: MAPREDUCE-6228.patch, MAPREDUCE-6228.patch, 
> MAPREDUCE-6228.patch
>
>
> Add truncate into the mix of operations for SLive test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6258) add support to back up JHS files from application master

2015-02-12 Thread Jian Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian Fang updated MAPREDUCE-6258:
-
Status: Patch Available  (was: Open)

> add support to back up JHS files from application master
> 
>
> Key: MAPREDUCE-6258
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6258
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: applicationmaster
>Affects Versions: 2.4.1
>Reporter: Jian Fang
> Attachments: MAPREDUCE-6258.patch
>
>
> In hadoop two, job history files are stored on HDFS with a default retention 
> period of one week. In a cloud environment, these HDFS files are actually 
> stored on the disks of ephemeral instances that could go away once the 
> instances are terminated. Users may want to back up the job history files for 
> issue investigation and performance analysis before and after the cluster is 
> terminated. 
> A centralized backup mechanism could have a scalability issue for big and 
> busy Hadoop clusters where there are probably tens of thousands of jobs every 
> day. As a result, it is preferred to have a distributed way to back up the 
> job history files in this case. To achieve this goal, we could add a new 
> feature to back up the job history files in Application master. More 
> specifically, we could copy the job history files to a backup path when they 
> are moved from the temporary staging directory to the intermediate_done path 
> in application master. Since application masters could run on any slave nodes 
> on a Hadoop cluster, we could achieve a better scalability by backing up the 
> job history files in a distributed fashion.
> Please be aware, the backup path should be managed by the Hadoop users based 
> on their needs. For example, some Hadoop users may copy the job history files 
> to a cloud storage directly and keep them there forever. While some other 
> users may want to store the job history files on local disks and clean them 
> up from time to time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6258) add support to back up JHS files from application master

2015-02-12 Thread Jian Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian Fang updated MAPREDUCE-6258:
-
Attachment: MAPREDUCE-6258.patch

> add support to back up JHS files from application master
> 
>
> Key: MAPREDUCE-6258
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6258
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: applicationmaster
>Affects Versions: 2.4.1
>Reporter: Jian Fang
> Attachments: MAPREDUCE-6258.patch
>
>
> In hadoop two, job history files are stored on HDFS with a default retention 
> period of one week. In a cloud environment, these HDFS files are actually 
> stored on the disks of ephemeral instances that could go away once the 
> instances are terminated. Users may want to back up the job history files for 
> issue investigation and performance analysis before and after the cluster is 
> terminated. 
> A centralized backup mechanism could have a scalability issue for big and 
> busy Hadoop clusters where there are probably tens of thousands of jobs every 
> day. As a result, it is preferred to have a distributed way to back up the 
> job history files in this case. To achieve this goal, we could add a new 
> feature to back up the job history files in Application master. More 
> specifically, we could copy the job history files to a backup path when they 
> are moved from the temporary staging directory to the intermediate_done path 
> in application master. Since application masters could run on any slave nodes 
> on a Hadoop cluster, we could achieve a better scalability by backing up the 
> job history files in a distributed fashion.
> Please be aware, the backup path should be managed by the Hadoop users based 
> on their needs. For example, some Hadoop users may copy the job history files 
> to a cloud storage directly and keep them there forever. While some other 
> users may want to store the job history files on local disks and clean them 
> up from time to time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (MAPREDUCE-6258) add support to back up JHS files from application master

2015-02-12 Thread Jian Fang (JIRA)
Jian Fang created MAPREDUCE-6258:


 Summary: add support to back up JHS files from application master
 Key: MAPREDUCE-6258
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6258
 Project: Hadoop Map/Reduce
  Issue Type: New Feature
  Components: applicationmaster
Affects Versions: 2.4.1
Reporter: Jian Fang


In Hadoop 2, job history files are stored on HDFS with a default retention 
period of one week. In a cloud environment, these HDFS files are actually 
stored on the disks of ephemeral instances that could go away once the 
instances are terminated. Users may want to back up the job history files for 
issue investigation and performance analysis before and after the cluster is 
terminated. 


A centralized backup mechanism could have a scalability issue for big and busy 
Hadoop clusters where there are probably tens of thousands of jobs every day. 
As a result, it is preferred to have a distributed way to back up the job 
history files in this case. To achieve this goal, we could add a new feature to 
back up the job history files in Application master. More specifically, we 
could copy the job history files to a backup path when they are moved from the 
temporary staging directory to the intermediate_done path in application 
master. Since application masters could run on any slave nodes on a Hadoop 
cluster, we could achieve a better scalability by backing up the job history 
files in a distributed fashion.

Please be aware, the backup path should be managed by the Hadoop users based on 
their needs. For example, some Hadoop users may copy the job history files to a 
cloud storage directly and keep them there forever. While some other users may 
want to store the job history files on local disks and clean them up from time 
to time.
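
To illustrate the idea, here is a rough sketch of the kind of hook described 
above. The configuration property name, the class name, and the hook point are 
hypothetical (this is not the attached patch); it only shows copying a finished 
history file to a user-managed backup path with the standard FileSystem and 
FileUtil APIs, in addition to the normal move into the intermediate-done 
directory.
{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

/** Hypothetical sketch only; not part of the attached MAPREDUCE-6258 patch. */
public class JobHistoryBackupSketch {

  /** Assumed property name; not an existing Hadoop configuration key. */
  public static final String BACKUP_DIR = "mapreduce.jobhistory.backup.dir";

  /**
   * Copy a finished job history file to the user-managed backup path, in
   * addition to the normal move into the intermediate-done directory.
   */
  public static void backup(Configuration conf, Path historyFile)
      throws IOException {
    String backupRoot = conf.get(BACKUP_DIR);
    if (backupRoot == null) {
      return; // feature disabled
    }
    Path dst = new Path(backupRoot, historyFile.getName());
    FileSystem srcFs = historyFile.getFileSystem(conf);
    FileSystem dstFs = dst.getFileSystem(conf);
    // Keep the original file: the application master still moves it to the
    // intermediate-done directory afterwards.
    FileUtil.copy(srcFs, historyFile, dstFs, dst, false, conf);
  }
}
{code}
Whether the copy runs synchronously or in a background thread, and how copy 
failures are reported, is left to the actual patch; a cloud storage destination 
would only change the file system behind the backup path.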



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-5269) Preemption of Reducer (and Shuffle) via checkpointing

2015-02-12 Thread Augusto Souza (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14319166#comment-14319166
 ] 

Augusto Souza commented on MAPREDUCE-5269:
--

Does anyone know why TestReduceTask is failing on the definition of the 
IFile.Writer constructor, even though I have changed the signature of this 
constructor in the patch I submitted?

It is as if Jenkins is running this test with an older version of the IFile 
class than the one my patch proposes.

Also, TestJobConf is also breaking on trunk on my development machine; do I 
need to look into this?

Thanks in advance for any help.

> Preemption of Reducer (and Shuffle) via checkpointing
> -
>
> Key: MAPREDUCE-5269
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5269
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: MAPREDUCE-5269.2.patch, MAPREDUCE-5269.3.patch, 
> MAPREDUCE-5269.4.patch, MAPREDUCE-5269.5.patch, MAPREDUCE-5269.6.patch, 
> MAPREDUCE-5269.7.patch, MAPREDUCE-5269.patch
>
>
> This patch tracks the changes in the task runtime (shuffle, reducer context, 
> etc.) that are required to implement checkpoint-based preemption of reducer 
> tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (MAPREDUCE-6257) Document encrypted spills

2015-02-12 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created MAPREDUCE-6257:
---

 Summary: Document encrypted spills
 Key: MAPREDUCE-6257
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6257
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: security
Reporter: Allen Wittenauer


Encrypted spills appear to be completely undocumented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6250) deprecate sbin/mr-jobhistory-daemon.sh

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14319092#comment-14319092
 ] 

Hudson commented on MAPREDUCE-6250:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #7094 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7094/])
MAPREDUCE-6250. deprecate sbin/mr-jobhistory-daemon.sh (aw) (aw: rev 
6f5290b0309c0d06a7e05f64354ca0c1fb5a4676)
* hadoop-mapreduce-project/bin/mr-jobhistory-daemon.sh
* hadoop-mapreduce-project/CHANGES.txt


> deprecate sbin/mr-jobhistory-daemon.sh
> --
>
> Key: MAPREDUCE-6250
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6250
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 3.0.0
>
> Attachments: MAPREDUCE-6250-00.patch
>
>
> Functionality has been moved to bin/mapred.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6250) deprecate sbin/mr-jobhistory-daemon.sh

2015-02-12 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-6250:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the review!

Committed to trunk.

> deprecate sbin/mr-jobhistory-daemon.sh
> --
>
> Key: MAPREDUCE-6250
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6250
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 3.0.0
>
> Attachments: MAPREDUCE-6250-00.patch
>
>
> Functionality has been moved to bin/mapred.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6252) JobHistoryServer should not fail when encountering a missing directory

2015-02-12 Thread Craig Welch (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318638#comment-14318638
 ] 

Craig Welch commented on MAPREDUCE-6252:


Not at all, will do.

> JobHistoryServer should not fail when encountering a missing directory
> --
>
> Key: MAPREDUCE-6252
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6252
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Affects Versions: 2.6.0
>Reporter: Craig Welch
>Assignee: Craig Welch
> Attachments: MAPREDUCE-6252.0.patch
>
>
> The JobHistoryServer maintains a cache of job serial number parts to dfs 
> paths which it uses when seeking a job it no longer has in its memory cache, 
> multiple directories for a given serial number differentiated by time stamp.  
> At present the jobhistory server will fail any time it attempts to find a job 
> in a directory which no longer exists based on that cache - even though the 
> job may well exist in a different directory for the serial number.  Typically 
> this is not an issue, but the history cleanup process removes the directory 
> from dfs before removing it from the cache which leaves a window of time 
> where a directory may be missing from dfs which is present in the cache, 
> resulting in failure.  For some dfs's it appears that the top level directory 
> may become unavailable some time before the full deletion of the tree 
> completes which extends what might otherwise be a brief period of failure to 
> a more extended period.  Further, this also places the service at the mercy 
> of outside processes which might remove those directories.  The proposal is 
> simply to make the server resistant to this state such that encountering this 
> missing directory is not fatal and the process will continue on to seek it 
> elsewhere.
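
As a rough illustration of the proposed behaviour (a hypothetical helper, not 
the attached patch): a serial-number directory that has already been removed is 
skipped with a warning instead of failing the whole lookup, and the search 
continues with the remaining candidate directories.
{code}
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Hypothetical helper; not code from the MAPREDUCE-6252 patch. */
public class MissingDirectorySketch {

  /**
   * Return the listing of the first candidate directory that still exists.
   * A directory that was cleaned up (or removed by an outside process) after
   * the serial-number cache was populated is skipped rather than fatal.
   */
  public static FileStatus[] listFirstExisting(FileSystem fs,
      List<Path> candidates) throws IOException {
    for (Path dir : candidates) {
      try {
        return fs.listStatus(dir);
      } catch (FileNotFoundException e) {
        System.err.println("Directory " + dir
            + " no longer exists, continuing with the next candidate");
      }
    }
    return new FileStatus[0];
  }
}
{code}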



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6253) Update use of Iterator to Iterable

2015-02-12 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318609#comment-14318609
 ] 

Ray Chiang commented on MAPREDUCE-6253:
---

Great.  Thanks for reviewing and committing!

> Update use of Iterator to Iterable
> --
>
> Key: MAPREDUCE-6253
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6253
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-6253.001.patch
>
>
> Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6221) Stringifier is left unclosed in Chain#getChainElementConf()

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318590#comment-14318590
 ] 

Hudson commented on MAPREDUCE-6221:
---

FAILURE: Integrated in Hadoop-trunk-Commit #7090 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7090/])
MAPREDUCE-6221. Stringifier is left unclosed in Chain#getChainElementConf(). 
Contributed by Ted Yu. (ozawa: rev 9b0ba59b8284fae132535fbca5ce372d7a6c38c0)
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/Chain.java


> Stringifier is left unclosed in Chain#getChainElementConf()
> ---
>
> Key: MAPREDUCE-6221
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6221
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: mapreduce-6221-001.patch
>
>
> {code}
>   Stringifier stringifier = 
> new DefaultStringifier(jobConf, Configuration.class);
>   String confString = jobConf.get(confKey, null);
>   if (confString != null) {
> conf = stringifier.fromString(jobConf.get(confKey, null));
> {code}
> stringifier is not closed upon return from the method.
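
One way to fix the leak quoted above, shown here only as a sketch (not 
necessarily what the committed mapreduce-6221-001.patch does): close the 
stringifier in a finally block once the configuration has been deserialized. 
The jobConf, confKey, and conf variables are the ones from the surrounding 
Chain#getChainElementConf() method.
{code}
DefaultStringifier<Configuration> stringifier =
    new DefaultStringifier<Configuration>(jobConf, Configuration.class);
try {
  String confString = jobConf.get(confKey, null);
  if (confString != null) {
    conf = stringifier.fromString(confString);
  }
} finally {
  // Stringifier extends Closeable; closing releases the underlying
  // serializer and deserializer.
  stringifier.close();
}
{code}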



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6221) Stringifier is left unclosed in Chain#getChainElementConf()

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318588#comment-14318588
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-6221:
---

Committed this to trunk and branch-2. Thanks [~ted_yu] for taking this JIRA!

> Stringifier is left unclosed in Chain#getChainElementConf()
> ---
>
> Key: MAPREDUCE-6221
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6221
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: mapreduce-6221-001.patch
>
>
> {code}
>   Stringifier stringifier = 
> new DefaultStringifier(jobConf, Configuration.class);
>   String confString = jobConf.get(confKey, null);
>   if (confString != null) {
> conf = stringifier.fromString(jobConf.get(confKey, null));
> {code}
> stringifier is not closed upon return from the method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6221) Stringifier is left unclosed in Chain#getChainElementConf()

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated MAPREDUCE-6221:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Stringifier is left unclosed in Chain#getChainElementConf()
> ---
>
> Key: MAPREDUCE-6221
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6221
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: mapreduce-6221-001.patch
>
>
> {code}
>   Stringifier stringifier = 
> new DefaultStringifier(jobConf, Configuration.class);
>   String confString = jobConf.get(confKey, null);
>   if (confString != null) {
> conf = stringifier.fromString(jobConf.get(confKey, null));
> {code}
> stringifier is not closed upon return from the method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6221) Stringifier is left unclosed in Chain#getChainElementConf()

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated MAPREDUCE-6221:
--
Affects Version/s: 2.6.0

> Stringifier is left unclosed in Chain#getChainElementConf()
> ---
>
> Key: MAPREDUCE-6221
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6221
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: mapreduce-6221-001.patch
>
>
> {code}
>   Stringifier stringifier = 
> new DefaultStringifier(jobConf, Configuration.class);
>   String confString = jobConf.get(confKey, null);
>   if (confString != null) {
> conf = stringifier.fromString(jobConf.get(confKey, null));
> {code}
> stringifier is not closed upon return from the method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6221) Stringifier is left unclosed in Chain#getChainElementConf()

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated MAPREDUCE-6221:
--
Hadoop Flags: Reviewed

> Stringifier is left unclosed in Chain#getChainElementConf()
> ---
>
> Key: MAPREDUCE-6221
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6221
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: mapreduce-6221-001.patch
>
>
> {code}
>   Stringifier stringifier = 
> new DefaultStringifier(jobConf, Configuration.class);
>   String confString = jobConf.get(confKey, null);
>   if (confString != null) {
> conf = stringifier.fromString(jobConf.get(confKey, null));
> {code}
> stringifier is not closed upon return from the method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6221) Stringifier is left unclosed in Chain#getChainElementConf()

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated MAPREDUCE-6221:
--
Fix Version/s: 2.7.0

> Stringifier is left unclosed in Chain#getChainElementConf()
> ---
>
> Key: MAPREDUCE-6221
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6221
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: mapreduce-6221-001.patch
>
>
> {code}
>   Stringifier stringifier = 
> new DefaultStringifier(jobConf, Configuration.class);
>   String confString = jobConf.get(confKey, null);
>   if (confString != null) {
> conf = stringifier.fromString(jobConf.get(confKey, null));
> {code}
> stringifier is not closed upon return from the method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6221) Stringifier is left unclosed in Chain#getChainElementConf()

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318583#comment-14318583
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-6221:
---

+1, committing this shortly.

> Stringifier is left unclosed in Chain#getChainElementConf()
> ---
>
> Key: MAPREDUCE-6221
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6221
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: mapreduce-6221-001.patch
>
>
> {code}
>   Stringifier stringifier = 
> new DefaultStringifier(jobConf, Configuration.class);
>   String confString = jobConf.get(confKey, null);
>   if (confString != null) {
> conf = stringifier.fromString(jobConf.get(confKey, null));
> {code}
> stringifier is not closed upon return from the method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6253) Update use of Iterator to Iterable

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318372#comment-14318372
 ] 

Hudson commented on MAPREDUCE-6253:
---

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2053 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2053/])
MAPREDUCE-6253. Update use of Iterator to Iterable. Contributed by Ray 
(devaraj: rev 76e309ead01f02b32335330cd920536f907fb71f)
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueManager.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java


> Update use of Iterator to Iterable
> --
>
> Key: MAPREDUCE-6253
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6253
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-6253.001.patch
>
>
> Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-5335) Rename Job Tracker terminology in ShuffleSchedulerImpl

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318373#comment-14318373
 ] 

Hudson commented on MAPREDUCE-5335:
---

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2053 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2053/])
MAPREDUCE-5335. Rename Job Tracker terminology in ShuffleSchedulerImpl. 
Contributed by Devaraj K. (ozawa: rev b42d09eb62bd1725d70da59f1a6fdac83cea82d1)
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleSchedulerImpl.java


> Rename Job Tracker terminology in ShuffleSchedulerImpl
> --
>
> Key: MAPREDUCE-5335
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5335
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: applicationmaster
>Reporter: Devaraj K
>Assignee: Devaraj K
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-5335-1.patch, MAPREDUCE-5335.patch
>
>
> {code:xml}
> 2013-06-17 17:27:30,134 INFO [fetcher#2] 
> org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: Reporting fetch 
> failure for attempt_1371467533091_0005_m_10_0 to jobtracker.
> {code}
> {code:title=ShuffleSchedulerImpl.java|borderStyle=solid}
>   // Notify the JobTracker
>   // after every read error, if 'reportReadErrorImmediately' is true or
>   // after every 'maxFetchFailuresBeforeReporting' failures
>   private void checkAndInformJobTracker(
>   int failures, TaskAttemptID mapId, boolean readError,
>   boolean connectExcpt) {
> if (connectExcpt || (reportReadErrorImmediately && readError)
> || ((failures % maxFetchFailuresBeforeReporting) == 0)) {
>   LOG.info("Reporting fetch failure for " + mapId + " to jobtracker.");
>   status.addFetchFailedMap((org.apache.hadoop.mapred.TaskAttemptID) 
> mapId);
> }
>   }
>  {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-4431) mapred command should print the reason on killing already completed jobs

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318368#comment-14318368
 ] 

Hudson commented on MAPREDUCE-4431:
---

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2053 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2053/])
MAPREDUCE-4431. mapred command should print the reason on killing already 
completed jobs. Contributed by Devaraj K. (ozawa: rev 
ac8d52bf50bba1a29489ee75fd90717d8a2b0cc9)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/tools/TestCLI.java
* hadoop-mapreduce-project/CHANGES.txt


> mapred command should print the reason on killing already completed jobs
> 
>
> Key: MAPREDUCE-4431
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4431
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2
>Reporter: Nishan Shetty
>Assignee: Devaraj K
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-4431-1.patch, MAPREDUCE-4431-2.patch, 
> MAPREDUCE-4431.patch
>
>
> If we try to kill the already completed job by the following command it gives 
> ambiguous message as "Killed job "
> ./mapred job -kill 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6253) Update use of Iterator to Iterable

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318322#comment-14318322
 ] 

Hudson commented on MAPREDUCE-6253:
---

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #103 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/103/])
MAPREDUCE-6253. Update use of Iterator to Iterable. Contributed by Ray 
(devaraj: rev 76e309ead01f02b32335330cd920536f907fb71f)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueManager.java
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java


> Update use of Iterator to Iterable
> --
>
> Key: MAPREDUCE-6253
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6253
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-6253.001.patch
>
>
> Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-4431) mapred command should print the reason on killing already completed jobs

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318318#comment-14318318
 ] 

Hudson commented on MAPREDUCE-4431:
---

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #103 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/103/])
MAPREDUCE-4431. mapred command should print the reason on killing already 
completed jobs. Contributed by Devaraj K. (ozawa: rev 
ac8d52bf50bba1a29489ee75fd90717d8a2b0cc9)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/tools/TestCLI.java
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java


> mapred command should print the reason on killing already completed jobs
> 
>
> Key: MAPREDUCE-4431
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4431
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2
>Reporter: Nishan Shetty
>Assignee: Devaraj K
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-4431-1.patch, MAPREDUCE-4431-2.patch, 
> MAPREDUCE-4431.patch
>
>
> If we try to kill the already completed job by the following command it gives 
> ambiguous message as "Killed job "
> ./mapred job -kill 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-5335) Rename Job Tracker terminology in ShuffleSchedulerImpl

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318323#comment-14318323
 ] 

Hudson commented on MAPREDUCE-5335:
---

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #103 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/103/])
MAPREDUCE-5335. Rename Job Tracker terminology in ShuffleSchedulerImpl. 
Contributed by Devaraj K. (ozawa: rev b42d09eb62bd1725d70da59f1a6fdac83cea82d1)
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleSchedulerImpl.java


> Rename Job Tracker terminology in ShuffleSchedulerImpl
> --
>
> Key: MAPREDUCE-5335
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5335
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: applicationmaster
>Reporter: Devaraj K
>Assignee: Devaraj K
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-5335-1.patch, MAPREDUCE-5335.patch
>
>
> {code:xml}
> 2013-06-17 17:27:30,134 INFO [fetcher#2] 
> org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: Reporting fetch 
> failure for attempt_1371467533091_0005_m_10_0 to jobtracker.
> {code}
> {code:title=ShuffleSchedulerImpl.java|borderStyle=solid}
>   // Notify the JobTracker
>   // after every read error, if 'reportReadErrorImmediately' is true or
>   // after every 'maxFetchFailuresBeforeReporting' failures
>   private void checkAndInformJobTracker(
>   int failures, TaskAttemptID mapId, boolean readError,
>   boolean connectExcpt) {
> if (connectExcpt || (reportReadErrorImmediately && readError)
> || ((failures % maxFetchFailuresBeforeReporting) == 0)) {
>   LOG.info("Reporting fetch failure for " + mapId + " to jobtracker.");
>   status.addFetchFailedMap((org.apache.hadoop.mapred.TaskAttemptID) 
> mapId);
> }
>   }
>  {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-5335) Rename Job Tracker terminology in ShuffleSchedulerImpl

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318241#comment-14318241
 ] 

Hudson commented on MAPREDUCE-5335:
---

FAILURE: Integrated in Hadoop-Hdfs-trunk #2034 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2034/])
MAPREDUCE-5335. Rename Job Tracker terminology in ShuffleSchedulerImpl. 
Contributed by Devaraj K. (ozawa: rev b42d09eb62bd1725d70da59f1a6fdac83cea82d1)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleSchedulerImpl.java
* hadoop-mapreduce-project/CHANGES.txt


> Rename Job Tracker terminology in ShuffleSchedulerImpl
> --
>
> Key: MAPREDUCE-5335
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5335
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: applicationmaster
>Reporter: Devaraj K
>Assignee: Devaraj K
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-5335-1.patch, MAPREDUCE-5335.patch
>
>
> {code:xml}
> 2013-06-17 17:27:30,134 INFO [fetcher#2] 
> org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: Reporting fetch 
> failure for attempt_1371467533091_0005_m_10_0 to jobtracker.
> {code}
> {code:title=ShuffleSchedulerImpl.java|borderStyle=solid}
>   // Notify the JobTracker
>   // after every read error, if 'reportReadErrorImmediately' is true or
>   // after every 'maxFetchFailuresBeforeReporting' failures
>   private void checkAndInformJobTracker(
>   int failures, TaskAttemptID mapId, boolean readError,
>   boolean connectExcpt) {
> if (connectExcpt || (reportReadErrorImmediately && readError)
> || ((failures % maxFetchFailuresBeforeReporting) == 0)) {
>   LOG.info("Reporting fetch failure for " + mapId + " to jobtracker.");
>   status.addFetchFailedMap((org.apache.hadoop.mapred.TaskAttemptID) 
> mapId);
> }
>   }
>  {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-4431) mapred command should print the reason on killing already completed jobs

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318236#comment-14318236
 ] 

Hudson commented on MAPREDUCE-4431:
---

FAILURE: Integrated in Hadoop-Hdfs-trunk #2034 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2034/])
MAPREDUCE-4431. mapred command should print the reason on killing already 
completed jobs. Contributed by Devaraj K. (ozawa: rev 
ac8d52bf50bba1a29489ee75fd90717d8a2b0cc9)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/tools/TestCLI.java
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java


> mapred command should print the reason on killing already completed jobs
> 
>
> Key: MAPREDUCE-4431
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4431
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2
>Reporter: Nishan Shetty
>Assignee: Devaraj K
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-4431-1.patch, MAPREDUCE-4431-2.patch, 
> MAPREDUCE-4431.patch
>
>
> If we try to kill the already completed job by the following command it gives 
> ambiguous message as "Killed job "
> ./mapred job -kill 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6253) Update use of Iterator to Iterable

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318240#comment-14318240
 ] 

Hudson commented on MAPREDUCE-6253:
---

FAILURE: Integrated in Hadoop-Hdfs-trunk #2034 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2034/])
MAPREDUCE-6253. Update use of Iterator to Iterable. Contributed by Ray 
(devaraj: rev 76e309ead01f02b32335330cd920536f907fb71f)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueManager.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java
* hadoop-mapreduce-project/CHANGES.txt


> Update use of Iterator to Iterable
> --
>
> Key: MAPREDUCE-6253
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6253
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-6253.001.patch
>
>
> Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6256) Removed unused private methods in o.a.h.mapreduce.Job.java

2015-02-12 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated MAPREDUCE-6256:
-
Labels: newbie  (was: )

> Removed unused private methods in o.a.h.mapreduce.Job.java
> --
>
> Key: MAPREDUCE-6256
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6256
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Devaraj K
>Priority: Minor
>  Labels: newbie
>
> These below methods are not used any where in the code and these can be 
> removed.
> {code:xml}
>   private void setStatus(JobStatus status)
>   private boolean shouldDownloadProfile()
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (MAPREDUCE-6256) Removed unused private methods in o.a.h.mapreduce.Job.java

2015-02-12 Thread Devaraj K (JIRA)
Devaraj K created MAPREDUCE-6256:


 Summary: Removed unused private methods in o.a.h.mapreduce.Job.java
 Key: MAPREDUCE-6256
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6256
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Reporter: Devaraj K
Priority: Minor


The methods below are not used anywhere in the code and can be removed.
{code:xml}
  private void setStatus(JobStatus status)
  private boolean shouldDownloadProfile()
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (MAPREDUCE-4294) Submitting job by enabling task profiling gives IOException

2015-02-12 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-4294.
--
Resolution: Fixed

It was fixed some time ago.

> Submitting job by enabling task profiling gives IOException
> ---
>
> Key: MAPREDUCE-4294
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4294
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Nishan Shetty
>Assignee: Devaraj K
>
> {noformat}
> java.io.IOException: Server returned HTTP response code: 400 for URL: 
> http://HOST-10-18-52-224:8080/tasklog?plaintext=true&attemptid=attempt_1338370885386_0006_m_00_0&filter=profile
> at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1290)
> at org.apache.hadoop.mapreduce.Job.downloadProfile(Job.java:1421)
> at org.apache.hadoop.mapreduce.Job.printTaskEvents(Job.java:1376)
> at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1310)
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1247)
> at org.apache.hadoop.examples.WordCount.main(WordCount.java:84)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
> at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
> at 
> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
> {noformat}
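
For context, task profiling is what triggers the profile download that failed above. A minimal sketch of how profiling is typically enabled for a job through the standard {{mapreduce.task.profile*}} properties; the attempt ranges are illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;

class EnableTaskProfilingSketch {
  static Configuration withProfiling(Configuration conf) {
    // Ask the framework to profile selected task attempts. While monitoring
    // the job, the client downloads the profile output, which is the request
    // that returned HTTP 400 in the stack trace above.
    conf.setBoolean("mapreduce.task.profile", true);
    conf.set("mapreduce.task.profile.maps", "0-1");     // profile map attempts 0-1
    conf.set("mapreduce.task.profile.reduces", "0-1");  // profile reduce attempts 0-1
    return conf;
  }
}
{code}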



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6253) Update use of Iterator to Iterable

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318107#comment-14318107
 ] 

Hudson commented on MAPREDUCE-6253:
---

SUCCESS: Integrated in Hadoop-Yarn-trunk #836 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/836/])
MAPREDUCE-6253. Update use of Iterator to Iterable. Contributed by Ray Chiang. 
(devaraj: rev 76e309ead01f02b32335330cd920536f907fb71f)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueManager.java
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java


> Update use of Iterator to Iterable
> --
>
> Key: MAPREDUCE-6253
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6253
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-6253.001.patch
>
>
> Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6253) Update use of Iterator to Iterable

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318060#comment-14318060
 ] 

Hudson commented on MAPREDUCE-6253:
---

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #102 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/102/])
MAPREDUCE-6253. Update use of Iterator to Iterable. Contributed by Ray Chiang. 
(devaraj: rev 76e309ead01f02b32335330cd920536f907fb71f)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueManager.java
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java


> Update use of Iterator to Iterable
> --
>
> Key: MAPREDUCE-6253
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6253
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-6253.001.patch
>
>
> Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-5335) Rename Job Tracker terminology in ShuffleSchedulerImpl

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318037#comment-14318037
 ] 

Hudson commented on MAPREDUCE-5335:
---

FAILURE: Integrated in Hadoop-trunk-Commit #7086 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7086/])
MAPREDUCE-5335. Rename Job Tracker terminology in ShuffleSchedulerImpl. 
Contributed by Devaraj K. (ozawa: rev b42d09eb62bd1725d70da59f1a6fdac83cea82d1)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/ShuffleSchedulerImpl.java
* hadoop-mapreduce-project/CHANGES.txt


> Rename Job Tracker terminology in ShuffleSchedulerImpl
> --
>
> Key: MAPREDUCE-5335
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5335
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: applicationmaster
>Reporter: Devaraj K
>Assignee: Devaraj K
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-5335-1.patch, MAPREDUCE-5335.patch
>
>
> {code:xml}
> 2013-06-17 17:27:30,134 INFO [fetcher#2] 
> org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: Reporting fetch 
> failure for attempt_1371467533091_0005_m_10_0 to jobtracker.
> {code}
> {code:title=ShuffleSchedulerImpl.java|borderStyle=solid}
>   // Notify the JobTracker
>   // after every read error, if 'reportReadErrorImmediately' is true or
>   // after every 'maxFetchFailuresBeforeReporting' failures
>   private void checkAndInformJobTracker(
>   int failures, TaskAttemptID mapId, boolean readError,
>   boolean connectExcpt) {
> if (connectExcpt || (reportReadErrorImmediately && readError)
> || ((failures % maxFetchFailuresBeforeReporting) == 0)) {
>   LOG.info("Reporting fetch failure for " + mapId + " to jobtracker.");
>   status.addFetchFailedMap((org.apache.hadoop.mapred.TaskAttemptID) 
> mapId);
> }
>   }
>  {code}
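
The fix is purely a terminology rename with no behavioural change. A self-contained sketch of the kind of rename involved; the new method name and log wording are assumptions for illustration, not quotes from the committed patch:

{code:java}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

class ShuffleReportingSketch {
  private static final Log LOG = LogFactory.getLog(ShuffleReportingSketch.class);

  private final boolean reportReadErrorImmediately = true;
  private final int maxFetchFailuresBeforeReporting = 10;

  // Renamed from checkAndInformJobTracker: in MRv2 fetch failures are reported
  // to the MR ApplicationMaster, not to a JobTracker.
  void checkAndInformMRAppMaster(int failures, String mapId,
                                 boolean readError, boolean connectExcpt) {
    if (connectExcpt || (reportReadErrorImmediately && readError)
        || ((failures % maxFetchFailuresBeforeReporting) == 0)) {
      LOG.info("Reporting fetch failure for " + mapId + " to the ApplicationMaster.");
      // status.addFetchFailedMap(...) would follow here, unchanged by the rename.
    }
  }
}
{code}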



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-4431) mapred command should print the reason on killing already completed jobs

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318036#comment-14318036
 ] 

Hudson commented on MAPREDUCE-4431:
---

FAILURE: Integrated in Hadoop-trunk-Commit #7086 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7086/])
MAPREDUCE-4431. mapred command should print the reason on killing already 
completed jobs. Contributed by Devaraj K. (ozawa: rev 
ac8d52bf50bba1a29489ee75fd90717d8a2b0cc9)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/tools/TestCLI.java
* hadoop-mapreduce-project/CHANGES.txt


> mapred command should print the reason on killing already completed jobs
> 
>
> Key: MAPREDUCE-4431
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4431
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2
>Reporter: Nishan Shetty
>Assignee: Devaraj K
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-4431-1.patch, MAPREDUCE-4431-2.patch, 
> MAPREDUCE-4431.patch
>
>
> If we try to kill an already completed job with the following command, it gives 
> the ambiguous message "Killed job "
> ./mapred job -kill 
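
A minimal sketch of the behaviour this improvement asks for, checking the job state through the public {{org.apache.hadoop.mapreduce.Job}} API before claiming the job was killed; this is an illustration, not the committed change to CLI.java:

{code:java}
import org.apache.hadoop.mapreduce.Cluster;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobID;

class KillJobWithReasonSketch {
  static void killJob(Cluster cluster, String jobIdStr) throws Exception {
    Job job = cluster.getJob(JobID.forName(jobIdStr));
    if (job == null) {
      System.out.println("Could not find job " + jobIdStr);
      return;
    }
    if (job.isComplete()) {
      // Tell the user why nothing was killed instead of printing "Killed job <id>".
      System.out.println("Could not kill job " + jobIdStr
          + ": it has already completed with state " + job.getJobState());
      return;
    }
    job.killJob();
    System.out.println("Killed job " + jobIdStr);
  }
}
{code}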



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-4431) mapred command should print the reason on killing already completed jobs

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated MAPREDUCE-4431:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> mapred command should print the reason on killing already completed jobs
> 
>
> Key: MAPREDUCE-4431
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4431
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2
>Reporter: Nishan Shetty
>Assignee: Devaraj K
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-4431-1.patch, MAPREDUCE-4431-2.patch, 
> MAPREDUCE-4431.patch
>
>
> If we try to kill an already completed job with the following command, it gives 
> the ambiguous message "Killed job "
> ./mapred job -kill 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-4431) mapred command should print the reason on killing already completed jobs

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318029#comment-14318029
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-4431:
---

Committed this to trunk and branch-2. Thanks [~devaraj.k] for your contribution, 
thanks [~nishan] for reporting this, and thanks [~qwertymaniac] for your review.


> mapred command should print the reason on killing already completed jobs
> 
>
> Key: MAPREDUCE-4431
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4431
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2
>Reporter: Nishan Shetty
>Assignee: Devaraj K
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-4431-1.patch, MAPREDUCE-4431-2.patch, 
> MAPREDUCE-4431.patch
>
>
> If we try to kill an already completed job with the following command, it gives 
> the ambiguous message "Killed job "
> ./mapred job -kill 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-4431) mapred command should print the reason on killing already completed jobs

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated MAPREDUCE-4431:
--
Issue Type: Improvement  (was: Bug)

> mapred command should print the reason on killing already completed jobs
> 
>
> Key: MAPREDUCE-4431
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4431
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2
>Reporter: Nishan Shetty
>Assignee: Devaraj K
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-4431-1.patch, MAPREDUCE-4431-2.patch, 
> MAPREDUCE-4431.patch
>
>
> If we try to kill an already completed job with the following command, it gives 
> the ambiguous message "Killed job "
> ./mapred job -kill 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-4431) mapred command should print the reason on killing already completed jobs

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated MAPREDUCE-4431:
--
Fix Version/s: 2.7.0

> mapred command should print the reason on killing already completed jobs
> 
>
> Key: MAPREDUCE-4431
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4431
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Reporter: Nishan Shetty
>Assignee: Devaraj K
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-4431-1.patch, MAPREDUCE-4431-2.patch, 
> MAPREDUCE-4431.patch
>
>
> If we try to kill an already completed job with the following command, it gives 
> the ambiguous message "Killed job "
> ./mapred job -kill 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-4431) mapred command should print the reason on killing already completed jobs

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated MAPREDUCE-4431:
--
Summary: mapred command should print the reason on killing already 
completed jobs  (was: killing already completed job gives ambiguous message as 
"Killed job ")

> mapred command should print the reason on killing already completed jobs
> 
>
> Key: MAPREDUCE-4431
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4431
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Reporter: Nishan Shetty
>Assignee: Devaraj K
>Priority: Minor
> Attachments: MAPREDUCE-4431-1.patch, MAPREDUCE-4431-2.patch, 
> MAPREDUCE-4431.patch
>
>
> If we try to kill an already completed job with the following command, it gives 
> the ambiguous message "Killed job "
> ./mapred job -kill 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-4431) killing already completed job gives ambiguous message as "Killed job "

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318015#comment-14318015
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-4431:
---

+1, committing this shortly.

> killing already completed job gives ambiguous message as "Killed job "
> --
>
> Key: MAPREDUCE-4431
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4431
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Reporter: Nishan Shetty
>Assignee: Devaraj K
>Priority: Minor
> Attachments: MAPREDUCE-4431-1.patch, MAPREDUCE-4431-2.patch, 
> MAPREDUCE-4431.patch
>
>
> If we try to kill an already completed job with the following command, it gives 
> the ambiguous message "Killed job "
> ./mapred job -kill 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-5335) Rename Job Tracker terminology in ShuffleSchedulerImpl

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14318003#comment-14318003
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-5335:
---

Committed this into trunk and branch-2. Thanks [~devaraj.k] for your 
contribution!

> Rename Job Tracker terminology in ShuffleSchedulerImpl
> --
>
> Key: MAPREDUCE-5335
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5335
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: applicationmaster
>Reporter: Devaraj K
>Assignee: Devaraj K
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-5335-1.patch, MAPREDUCE-5335.patch
>
>
> {code:xml}
> 2013-06-17 17:27:30,134 INFO [fetcher#2] 
> org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: Reporting fetch 
> failure for attempt_1371467533091_0005_m_10_0 to jobtracker.
> {code}
> {code:title=ShuffleSchedulerImpl.java|borderStyle=solid}
>   // Notify the JobTracker
>   // after every read error, if 'reportReadErrorImmediately' is true or
>   // after every 'maxFetchFailuresBeforeReporting' failures
>   private void checkAndInformJobTracker(
>   int failures, TaskAttemptID mapId, boolean readError,
>   boolean connectExcpt) {
> if (connectExcpt || (reportReadErrorImmediately && readError)
> || ((failures % maxFetchFailuresBeforeReporting) == 0)) {
>   LOG.info("Reporting fetch failure for " + mapId + " to jobtracker.");
>   status.addFetchFailedMap((org.apache.hadoop.mapred.TaskAttemptID) 
> mapId);
> }
>   }
>  {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-5335) Rename Job Tracker terminology in ShuffleSchedulerImpl

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated MAPREDUCE-5335:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Rename Job Tracker terminology in ShuffleSchedulerImpl
> --
>
> Key: MAPREDUCE-5335
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5335
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: applicationmaster
>Reporter: Devaraj K
>Assignee: Devaraj K
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-5335-1.patch, MAPREDUCE-5335.patch
>
>
> {code:xml}
> 2013-06-17 17:27:30,134 INFO [fetcher#2] 
> org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: Reporting fetch 
> failure for attempt_1371467533091_0005_m_10_0 to jobtracker.
> {code}
> {code:title=ShuffleSchedulerImpl.java|borderStyle=solid}
>   // Notify the JobTracker
>   // after every read error, if 'reportReadErrorImmediately' is true or
>   // after every 'maxFetchFailuresBeforeReporting' failures
>   private void checkAndInformJobTracker(
>   int failures, TaskAttemptID mapId, boolean readError,
>   boolean connectExcpt) {
> if (connectExcpt || (reportReadErrorImmediately && readError)
> || ((failures % maxFetchFailuresBeforeReporting) == 0)) {
>   LOG.info("Reporting fetch failure for " + mapId + " to jobtracker.");
>   status.addFetchFailedMap((org.apache.hadoop.mapred.TaskAttemptID) 
> mapId);
> }
>   }
>  {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-5335) Rename Job Tracker terminology in ShuffleSchedulerImpl

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317999#comment-14317999
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-5335:
---

Committing this shortly.

> Rename Job Tracker terminology in ShuffleSchedulerImpl
> --
>
> Key: MAPREDUCE-5335
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5335
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster
>Reporter: Devaraj K
>Assignee: Devaraj K
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-5335-1.patch, MAPREDUCE-5335.patch
>
>
> {code:xml}
> 2013-06-17 17:27:30,134 INFO [fetcher#2] 
> org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: Reporting fetch 
> failure for attempt_1371467533091_0005_m_10_0 to jobtracker.
> {code}
> {code:title=ShuffleSchedulerImpl.java|borderStyle=solid}
>   // Notify the JobTracker
>   // after every read error, if 'reportReadErrorImmediately' is true or
>   // after every 'maxFetchFailuresBeforeReporting' failures
>   private void checkAndInformJobTracker(
>   int failures, TaskAttemptID mapId, boolean readError,
>   boolean connectExcpt) {
> if (connectExcpt || (reportReadErrorImmediately && readError)
> || ((failures % maxFetchFailuresBeforeReporting) == 0)) {
>   LOG.info("Reporting fetch failure for " + mapId + " to jobtracker.");
>   status.addFetchFailedMap((org.apache.hadoop.mapred.TaskAttemptID) 
> mapId);
> }
>   }
>  {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-5335) Rename Job Tracker terminology in ShuffleSchedulerImpl

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated MAPREDUCE-5335:
--
Fix Version/s: 2.7.0

> Rename Job Tracker terminology in ShuffleSchedulerImpl
> --
>
> Key: MAPREDUCE-5335
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5335
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: applicationmaster
>Reporter: Devaraj K
>Assignee: Devaraj K
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-5335-1.patch, MAPREDUCE-5335.patch
>
>
> {code:xml}
> 2013-06-17 17:27:30,134 INFO [fetcher#2] 
> org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: Reporting fetch 
> failure for attempt_1371467533091_0005_m_10_0 to jobtracker.
> {code}
> {code:title=ShuffleSchedulerImpl.java|borderStyle=solid}
>   // Notify the JobTracker
>   // after every read error, if 'reportReadErrorImmediately' is true or
>   // after every 'maxFetchFailuresBeforeReporting' failures
>   private void checkAndInformJobTracker(
>   int failures, TaskAttemptID mapId, boolean readError,
>   boolean connectExcpt) {
> if (connectExcpt || (reportReadErrorImmediately && readError)
> || ((failures % maxFetchFailuresBeforeReporting) == 0)) {
>   LOG.info("Reporting fetch failure for " + mapId + " to jobtracker.");
>   status.addFetchFailedMap((org.apache.hadoop.mapred.TaskAttemptID) 
> mapId);
> }
>   }
>  {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-5335) Rename Job Tracker terminology in ShuffleSchedulerImpl

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated MAPREDUCE-5335:
--
Issue Type: Improvement  (was: Bug)

> Rename Job Tracker terminology in ShuffleSchedulerImpl
> --
>
> Key: MAPREDUCE-5335
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5335
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: applicationmaster
>Reporter: Devaraj K
>Assignee: Devaraj K
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-5335-1.patch, MAPREDUCE-5335.patch
>
>
> {code:xml}
> 2013-06-17 17:27:30,134 INFO [fetcher#2] 
> org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: Reporting fetch 
> failure for attempt_1371467533091_0005_m_10_0 to jobtracker.
> {code}
> {code:title=ShuffleSchedulerImpl.java|borderStyle=solid}
>   // Notify the JobTracker
>   // after every read error, if 'reportReadErrorImmediately' is true or
>   // after every 'maxFetchFailuresBeforeReporting' failures
>   private void checkAndInformJobTracker(
>   int failures, TaskAttemptID mapId, boolean readError,
>   boolean connectExcpt) {
> if (connectExcpt || (reportReadErrorImmediately && readError)
> || ((failures % maxFetchFailuresBeforeReporting) == 0)) {
>   LOG.info("Reporting fetch failure for " + mapId + " to jobtracker.");
>   status.addFetchFailedMap((org.apache.hadoop.mapred.TaskAttemptID) 
> mapId);
> }
>   }
>  {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-5335) Rename Job Tracker terminology in ShuffleSchedulerImpl

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated MAPREDUCE-5335:
--
Hadoop Flags: Reviewed

> Rename Job Tracker terminology in ShuffleSchedulerImpl
> --
>
> Key: MAPREDUCE-5335
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5335
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster
>Reporter: Devaraj K
>Assignee: Devaraj K
> Attachments: MAPREDUCE-5335-1.patch, MAPREDUCE-5335.patch
>
>
> {code:xml}
> 2013-06-17 17:27:30,134 INFO [fetcher#2] 
> org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: Reporting fetch 
> failure for attempt_1371467533091_0005_m_10_0 to jobtracker.
> {code}
> {code:title=ShuffleSchedulerImpl.java|borderStyle=solid}
>   // Notify the JobTracker
>   // after every read error, if 'reportReadErrorImmediately' is true or
>   // after every 'maxFetchFailuresBeforeReporting' failures
>   private void checkAndInformJobTracker(
>   int failures, TaskAttemptID mapId, boolean readError,
>   boolean connectExcpt) {
> if (connectExcpt || (reportReadErrorImmediately && readError)
> || ((failures % maxFetchFailuresBeforeReporting) == 0)) {
>   LOG.info("Reporting fetch failure for " + mapId + " to jobtracker.");
>   status.addFetchFailedMap((org.apache.hadoop.mapred.TaskAttemptID) 
> mapId);
> }
>   }
>  {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-5335) Rename Job Tracker terminology in ShuffleSchedulerImpl

2015-02-12 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317996#comment-14317996
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-5335:
---

+1, the findbugs report is not related to the patch. 

> Rename Job Tracker terminology in ShuffleSchedulerImpl
> --
>
> Key: MAPREDUCE-5335
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5335
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster
>Reporter: Devaraj K
>Assignee: Devaraj K
> Attachments: MAPREDUCE-5335-1.patch, MAPREDUCE-5335.patch
>
>
> {code:xml}
> 2013-06-17 17:27:30,134 INFO [fetcher#2] 
> org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler: Reporting fetch 
> failure for attempt_1371467533091_0005_m_10_0 to jobtracker.
> {code}
> {code:title=ShuffleSchedulerImpl.java|borderStyle=solid}
>   // Notify the JobTracker
>   // after every read error, if 'reportReadErrorImmediately' is true or
>   // after every 'maxFetchFailuresBeforeReporting' failures
>   private void checkAndInformJobTracker(
>   int failures, TaskAttemptID mapId, boolean readError,
>   boolean connectExcpt) {
> if (connectExcpt || (reportReadErrorImmediately && readError)
> || ((failures % maxFetchFailuresBeforeReporting) == 0)) {
>   LOG.info("Reporting fetch failure for " + mapId + " to jobtracker.");
>   status.addFetchFailedMap((org.apache.hadoop.mapred.TaskAttemptID) 
> mapId);
> }
>   }
>  {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (MAPREDUCE-2647) Memory sharing across all the Tasks in the Task Tracker to improve the job performance

2015-02-12 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K resolved MAPREDUCE-2647.
--
Resolution: Won't Fix

Closing this as Won't Fix, since there is no active feature development happening 
in MRv1.

> Memory sharing across all the Tasks in the Task Tracker to improve the job 
> performance
> --
>
> Key: MAPREDUCE-2647
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2647
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: tasktracker
>Reporter: Devaraj K
>Assignee: Devaraj K
>
>   If all the tasks (maps/reduces) work with the same additional data, each task 
> currently has to load that data into memory and read it on its own, so every 
> task repeats the same work. Instead of each task loading the data, it could be 
> loaded into main memory once and shared by all the tasks.
> h5.Proposed Solution:
> 1. Provide a mechanism to load the data into shared memory and to read it back 
> from main memory.
> 2. Provide a Java API, backed by a native implementation, for reading the data 
> from memory. All the maps/reducers can use this API to read the data from main 
> memory.
> h5.Example:
>   Suppose a map task uses an IP address as a key and needs to look up the 
> location of that IP address in a local file. Today each map task has to open 
> the file, load it into memory, read from it, and close it, which costs time in 
> every task. Instead, the file could be loaded into the task tracker's memory 
> once and each task could read from that memory directly.
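
The feature was never implemented (resolved Won't Fix above), but a hypothetical sketch of the API the proposal describes could look like the following; every name and signature here is invented for illustration:

{code:java}
// Hypothetical only: MAPREDUCE-2647 was closed Won't Fix, so no such API exists
// in Hadoop. Names and signatures are invented to illustrate the proposal.
import java.nio.ByteBuffer;

interface SharedTaskMemory {
  // Load a side file into TaskTracker-managed shared memory once per node.
  void load(String name, byte[] data);

  // Read-only view that every task on the node could use without re-reading the file.
  ByteBuffer get(String name);
}
{code}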



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-2350) LocalJobRunner uses "mapred.output.committer.class" configuration property to retrieve the OutputCommitter (regardless of whether the old API is used or the new API)

2015-02-12 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated MAPREDUCE-2350:
-
   Resolution: Duplicate
Fix Version/s: (was: 0.24.0)
   Status: Resolved  (was: Patch Available)

This was fixed by MAPREDUCE-3563.

> LocalJobRunner uses "mapred.output.committer.class" configuration property to 
> retrieve the OutputCommitter (regardless of whether the old API is used or the 
> new API)
> 
>
> Key: MAPREDUCE-2350
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2350
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Ahmed Radwan
>Assignee: Devaraj K
> Attachments: MAPREDUCE-2350-1.patch, MAPREDUCE-2350.patch
>
>
> LocalJobRunner uses the "mapred.output.committer.class" configuration 
> property to retrieve the output committer for the job, which can be different 
> from the Output Committer returned from 
> OutputFormat.getOutputCommitter(TaskAttemptContext context). So, two 
> different output committers can be used in the same job.
> See line 324 in org.apache.hadoop.mapred.LocalJobRunner: OutputCommitter 
> outputCommitter = job.getOutputCommitter();
> This behavior needs to be modified to check whether the new or the old API is 
> in use, and then return the correct output committer.
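
A sketch of the behaviour the report asks for: choose the committer based on which API the job actually uses, instead of always reading "mapred.output.committer.class". This is an illustration of that idea, not the fix that eventually went in through MAPREDUCE-3563:

{code:java}
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.OutputCommitter;
import org.apache.hadoop.mapreduce.OutputFormat;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.util.ReflectionUtils;

class CommitterSelectionSketch {
  static OutputCommitter selectCommitter(JobConf conf, TaskAttemptContext taskContext)
      throws Exception {
    if (conf.getUseNewMapper() || conf.getUseNewReducer()) {
      // New API: ask the job's OutputFormat for its committer, so the committer
      // that runs matches OutputFormat.getOutputCommitter(context).
      OutputFormat<?, ?> outputFormat =
          ReflectionUtils.newInstance(taskContext.getOutputFormatClass(), conf);
      return outputFormat.getOutputCommitter(taskContext);
    }
    // Old API: fall back to mapred.output.committer.class via JobConf.
    return conf.getOutputCommitter();
  }
}
{code}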



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6228) Add truncate operation to SLive

2015-02-12 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317838#comment-14317838
 ] 

Konstantin Shvachko commented on MAPREDUCE-6228:


A few minor things:
# The line defining {{ConfigOption WAIT_ON_TRUNCATE}} is too long.
# Let's increment {{PROG_VERSION = "0.0.2";}} to {{0.1.0}}. I missed it last time.
# {{0M}} and {{Constants.MEGABYTES * 0}} are just {{0}}.
# The unit test is passing for me.
# Could you please confirm that it has been tried on a cluster with truncate enabled?

> Add truncate operation to SLive
> ---
>
> Key: MAPREDUCE-6228
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6228
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: benchmarks, test
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Attachments: MAPREDUCE-6228.patch, MAPREDUCE-6228.patch
>
>
> Add truncate into the mix of operations for SLive test.
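
For context, the operation being added to the SLive mix is the {{FileSystem.truncate}} call available from Hadoop 2.7 onwards. A minimal standalone usage example; the path is illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class TruncateExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/tmp/slive-example.dat");  // illustrative path

    // Truncate the file to 1 MB. A return value of true means the truncate
    // completed immediately; false means block recovery is still in progress
    // and the caller may need to wait before reopening the file.
    boolean done = fs.truncate(file, 1024L * 1024L);
    System.out.println("truncate finished immediately: " + done);
  }
}
{code}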



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6253) Update use of Iterator to Iterable

2015-02-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14317794#comment-14317794
 ] 

Hudson commented on MAPREDUCE-6253:
---

FAILURE: Integrated in Hadoop-trunk-Commit #7084 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7084/])
MAPREDUCE-6253. Update use of Iterator to Iterable. Contributed by Ray Chiang. 
(devaraj: rev 76e309ead01f02b32335330cd920536f907fb71f)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedJob.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueManager.java


> Update use of Iterator to Iterable
> --
>
> Key: MAPREDUCE-6253
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6253
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-6253.001.patch
>
>
> Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6253) Update use of Iterator to Iterable

2015-02-12 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated MAPREDUCE-6253:
-
   Resolution: Fixed
Fix Version/s: 2.7.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.

Thanks [~rchiang].

> Update use of Iterator to Iterable
> --
>
> Key: MAPREDUCE-6253
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6253
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: MAPREDUCE-6253.001.patch
>
>
> Found these using the IntelliJ Findbugs-IDEA plugin, which uses findbugs3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)