[jira] [Commented] (MAPREDUCE-6351) Reducer hung in copy phase.

2015-05-05 Thread Laxman (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528154#comment-14528154
 ] 

Laxman commented on MAPREDUCE-6351:
---

Thanks a lot, Jason, for the details. We are hitting exactly the same scenario 
(bad disk) as explained in MAPREDUCE-6334.
We will try the patch and update the details in this JIRA.



> Reducer hung in copy phase.
> ---
>
> Key: MAPREDUCE-6351
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6351
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Affects Versions: 2.6.0
>Reporter: Laxman
> Attachments: jstat-gc.log, reducer-container-partial.log.zip, 
> thread-dumps.out
>
>
> *Problem*
> Reducer gets stuck in the copy phase and doesn't make progress for a very 
> long time. After manually killing this task a couple of times, it gets 
> completed. 
> *Observations*
> - Verified gc logs. Found no memory-related issues. Attached the logs.
> - Verified thread dumps. Found no thread-related problems. 
> - On verification of logs, the fetcher threads are not copying the map 
> outputs; they are just waiting for the merge to happen.
> - Merge thread is alive and in a wait state.
> *Analysis* 
> On careful observation of the logs, thread dumps, and code, this looks to me 
> like a classic case of a multi-threading issue: a thread goes to a wait state 
> after it has been notified. 
> Here is the suspect code flow.
> *Thread #1*
> Fetcher thread - notification comes first
> org.apache.hadoop.mapreduce.task.reduce.MergeThread.startMerge(Set)
> {code}
>   synchronized(pendingToBeMerged) {
> pendingToBeMerged.addLast(toMergeInputs);
> pendingToBeMerged.notifyAll();
>   }
> {code}
> *Thread #2*
> Merge Thread - goes to wait state (Notification goes unconsumed)
> org.apache.hadoop.mapreduce.task.reduce.MergeThread.run()
> {code}
> synchronized (pendingToBeMerged) {
>   while(pendingToBeMerged.size() <= 0) {
> pendingToBeMerged.wait();
>   }
>   // Pickup the inputs to merge.
>   inputs = pendingToBeMerged.removeFirst();
> }
> {code}
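The suspect handoff above can be run standalone. Below is a minimal sketch of the same pattern, with a stand-in class name and plain String inputs instead of the real merge inputs. Note that, as written, the size recheck in the while loop means a notification that arrives before the wait is absorbed: the waiting thread sees the non-empty queue and never blocks.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Set;

// Stand-in for the MergeThread handoff quoted above (names are illustrative,
// not the Hadoop sources): a fetcher thread enqueues inputs and notifies,
// while the merge thread waits in a guarded loop on the same queue monitor.
public class MergeHandoffSketch {
    private final Deque<Set<String>> pendingToBeMerged = new ArrayDeque<>();

    // Fetcher side: corresponds to MergeThread.startMerge(Set)
    public void startMerge(Set<String> toMergeInputs) {
        synchronized (pendingToBeMerged) {
            pendingToBeMerged.addLast(toMergeInputs);
            pendingToBeMerged.notifyAll();
        }
    }

    // Merge side: corresponds to the loop body of MergeThread.run()
    public Set<String> takeMergeInputs() throws InterruptedException {
        synchronized (pendingToBeMerged) {
            // Guarded wait: a notify that happened before we got here is not
            // lost, because the queue is already non-empty and we skip wait().
            while (pendingToBeMerged.isEmpty()) {
                pendingToBeMerged.wait();
            }
            return pendingToBeMerged.removeFirst();
        }
    }

    public static void main(String[] args) throws Exception {
        MergeHandoffSketch sketch = new MergeHandoffSketch();
        // Notify first, then consume: the consumer still makes progress.
        sketch.startMerge(Set.of("map_0.out", "map_1.out"));
        Set<String> inputs = sketch.takeMergeInputs();
        System.out.println("merged inputs: " + inputs.size());
    }
}
```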



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6259) IllegalArgumentException due to missing job submit time

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528289#comment-14528289
 ] 

Hudson commented on MAPREDUCE-6259:
---

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #184 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/184/])
MAPREDUCE-6259. IllegalArgumentException due to missing job submit time. 
Contributed by zhihai xu (jlowe: rev bf70c5ae2824a9139c1aa9d7c14020018881cec2)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/AMStartedEvent.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* hadoop-mapreduce-project/CHANGES.txt


> IllegalArgumentException due to missing job submit time
> ---
>
> Key: MAPREDUCE-6259
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6259
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.7.1
>
> Attachments: MAPREDUCE-6259.000.patch
>
>
> A -1 job submit time causes an IllegalArgumentException when parsing the job 
> history file name, and JOB_INIT_FAILED causes the -1 job submit time in 
> JobIndexInfo.
> We found the following job history file name, which causes an 
> IllegalArgumentException when parsing the job status in the job history file 
> name.
> {code}
> job_1418398645407_115853--1-worun-kafka%2Dto%2Dhdfs%5Btwo%5D%5B15+topic%28s%29%5D-1423572836007-0-0-FAILED-root.journaling-1423572836007.jhist
> {code}
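For illustration, the failure mode in the stack trace below can be reproduced with a stand-in enum (the real JobState lives in org.apache.hadoop.mapreduce.v2.api.records and may declare different constants). Enum.valueOf throws IllegalArgumentException for any token that is not a declared constant, which is what happens when the shifted file-name fields yield a numeric token such as "0" where the status should be:

```java
// Minimal sketch of the JobState.valueOf failure; the enum constants here are
// illustrative, not the exact Hadoop JobState definition.
public class JobStateParseSketch {
    public enum JobState { NEW, RUNNING, SUCCEEDED, FAILED, KILLED }

    public static void main(String[] args) {
        // With a -1 submit time the dash-separated fields shift, so the
        // parser can pick up "0" where the job status token should be.
        String badStatusToken = "0";
        try {
            JobState state = JobState.valueOf(badStatusToken);
            System.out.println("parsed: " + state);
        } catch (IllegalArgumentException e) {
            // Same exception as in the stack trace: no enum constant "0".
            // The history server then defaults the state to KILLED.
            System.out.println("defaulting to KILLED: " + e.getMessage());
        }
    }
}
```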
> The stack trace for the IllegalArgumentException is
> {code}
> 2015-02-10 04:54:01,863 WARN org.apache.hadoop.mapreduce.v2.hs.PartialJob: 
> Exception while parsing job state. Defaulting to KILLED
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.mapreduce.v2.api.records.JobState.0
>   at java.lang.Enum.valueOf(Enum.java:236)
>   at 
> org.apache.hadoop.mapreduce.v2.api.records.JobState.valueOf(JobState.java:21)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.PartialJob.getState(PartialJob.java:82)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.PartialJob.(PartialJob.java:59)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getAllPartialJobs(CachedHistoryStorage.java:159)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getPartialJobs(CachedHistoryStorage.java:173)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.JobHistory.getPartialJobs(JobHistory.java:284)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.webapp.HsWebServices.getJobs(HsWebServices.java:212)
>   at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
>   at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
>   at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
>   at 
> com.sun.jersey.spi.container.servlet.ServletCont

[jira] [Commented] (MAPREDUCE-6165) [JDK8] TestCombineFileInputFormat failed on JDK8

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528287#comment-14528287
 ] 

Hudson commented on MAPREDUCE-6165:
---

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #184 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/184/])
MAPREDUCE-6165. [JDK8] TestCombineFileInputFormat failed on JDK8. Contributed 
by Akira AJISAKA. (ozawa: rev 551615fa13f65ae996bae9c1bacff189539b6557)
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java


> [JDK8] TestCombineFileInputFormat failed on JDK8
> 
>
> Key: MAPREDUCE-6165
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6165
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Wei Yan
>Assignee: Akira AJISAKA
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: MAPREDUCE-6165-001.patch, MAPREDUCE-6165-002.patch, 
> MAPREDUCE-6165-003.patch, MAPREDUCE-6165-003.patch, MAPREDUCE-6165-004.patch, 
> MAPREDUCE-6165-reproduce.patch
>
>
> The error msg:
> {noformat}
> testSplitPlacementForCompressedFiles(org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat)
>   Time elapsed: 2.487 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<2> but was:<1>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at junit.framework.TestCase.assertEquals(TestCase.java:409)
>   at 
> org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacementForCompressedFiles(TestCombineFileInputFormat.java:911)
> testSplitPlacement(org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat)
>   Time elapsed: 0.985 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<2> but was:<1>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at junit.framework.TestCase.assertEquals(TestCase.java:409)
>   at 
> org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacement(TestCombineFileInputFormat.java:368)
> {noformat}





[jira] [Commented] (MAPREDUCE-5649) Reduce cannot use more than 2G memory for the final merge

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528295#comment-14528295
 ] 

Hudson commented on MAPREDUCE-5649:
---

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #184 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/184/])
MAPREDUCE-5649. Reduce cannot use more than 2G memory for the final merge. 
Contributed by Gera Shegalov (jlowe: rev 
7dc3c1203d1ab14c09d0aaf0869a5bcdfafb0a5a)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMergeManager.java
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java


> Reduce cannot use more than 2G memory  for the final merge
> --
>
> Key: MAPREDUCE-5649
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5649
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Reporter: stanley shi
>Assignee: Gera Shegalov
> Fix For: 2.8.0
>
> Attachments: MAPREDUCE-5649.001.patch, MAPREDUCE-5649.002.patch, 
> MAPREDUCE-5649.003.patch
>
>
> In org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl, in the 
> finalMerge method: 
> {code}
> int maxInMemReduce = (int)Math.min(
>     Runtime.getRuntime().maxMemory() * maxRedPer, Integer.MAX_VALUE);
> {code}
> This means that no matter how much memory the user has, the reducer will not 
> retain more than 2 GB of data in memory before the reduce phase starts.
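A minimal sketch of why this expression caps at roughly 2 GB (the heap size and ratio below are hypothetical inputs, not values from the patch): Math.min against Integer.MAX_VALUE, followed by the cast to int, bounds the result at 2^31 - 1 bytes no matter how large the heap is.

```java
// Demonstrates the 2 GB clamp in the quoted finalMerge() expression.
public class FinalMergeCapSketch {
    public static int maxInMemReduce(long maxMemory, float maxRedPer) {
        // Same shape as the quoted code: float multiply, min against
        // Integer.MAX_VALUE, then a narrowing cast to int.
        return (int) Math.min(maxMemory * maxRedPer, Integer.MAX_VALUE);
    }

    public static void main(String[] args) {
        long eightGiB = 8L * 1024 * 1024 * 1024;
        // Even with an 8 GiB heap and a 0.9 ratio, the in-memory budget is
        // clamped to Integer.MAX_VALUE bytes, i.e. just under 2 GiB.
        System.out.println(maxInMemReduce(eightGiB, 0.9f)); // 2147483647
    }
}
```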





[jira] [Commented] (MAPREDUCE-6259) IllegalArgumentException due to missing job submit time

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528323#comment-14528323
 ] 

Hudson commented on MAPREDUCE-6259:
---

SUCCESS: Integrated in Hadoop-Yarn-trunk #918 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/918/])
MAPREDUCE-6259. IllegalArgumentException due to missing job submit time. 
Contributed by zhihai xu (jlowe: rev bf70c5ae2824a9139c1aa9d7c14020018881cec2)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/AMStartedEvent.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java


> IllegalArgumentException due to missing job submit time
> ---
>
> Key: MAPREDUCE-6259
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6259
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.7.1
>
> Attachments: MAPREDUCE-6259.000.patch
>
>
> A -1 job submit time causes an IllegalArgumentException when parsing the job 
> history file name, and JOB_INIT_FAILED causes the -1 job submit time in 
> JobIndexInfo.
> We found the following job history file name, which causes an 
> IllegalArgumentException when parsing the job status in the job history file 
> name.
> {code}
> job_1418398645407_115853--1-worun-kafka%2Dto%2Dhdfs%5Btwo%5D%5B15+topic%28s%29%5D-1423572836007-0-0-FAILED-root.journaling-1423572836007.jhist
> {code}
> The stack trace for the IllegalArgumentException is
> {code}
> 2015-02-10 04:54:01,863 WARN org.apache.hadoop.mapreduce.v2.hs.PartialJob: 
> Exception while parsing job state. Defaulting to KILLED
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.mapreduce.v2.api.records.JobState.0
>   at java.lang.Enum.valueOf(Enum.java:236)
>   at 
> org.apache.hadoop.mapreduce.v2.api.records.JobState.valueOf(JobState.java:21)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.PartialJob.getState(PartialJob.java:82)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.PartialJob.(PartialJob.java:59)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getAllPartialJobs(CachedHistoryStorage.java:159)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getPartialJobs(CachedHistoryStorage.java:173)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.JobHistory.getPartialJobs(JobHistory.java:284)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.webapp.HsWebServices.getJobs(HsWebServices.java:212)
>   at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
>   at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
>   at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.servic

[jira] [Commented] (MAPREDUCE-6165) [JDK8] TestCombineFileInputFormat failed on JDK8

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528321#comment-14528321
 ] 

Hudson commented on MAPREDUCE-6165:
---

SUCCESS: Integrated in Hadoop-Yarn-trunk #918 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/918/])
MAPREDUCE-6165. [JDK8] TestCombineFileInputFormat failed on JDK8. Contributed 
by Akira AJISAKA. (ozawa: rev 551615fa13f65ae996bae9c1bacff189539b6557)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java
* hadoop-mapreduce-project/CHANGES.txt


> [JDK8] TestCombineFileInputFormat failed on JDK8
> 
>
> Key: MAPREDUCE-6165
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6165
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Wei Yan
>Assignee: Akira AJISAKA
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: MAPREDUCE-6165-001.patch, MAPREDUCE-6165-002.patch, 
> MAPREDUCE-6165-003.patch, MAPREDUCE-6165-003.patch, MAPREDUCE-6165-004.patch, 
> MAPREDUCE-6165-reproduce.patch
>
>
> The error msg:
> {noformat}
> testSplitPlacementForCompressedFiles(org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat)
>   Time elapsed: 2.487 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<2> but was:<1>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at junit.framework.TestCase.assertEquals(TestCase.java:409)
>   at 
> org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacementForCompressedFiles(TestCombineFileInputFormat.java:911)
> testSplitPlacement(org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat)
>   Time elapsed: 0.985 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<2> but was:<1>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at junit.framework.TestCase.assertEquals(TestCase.java:409)
>   at 
> org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacement(TestCombineFileInputFormat.java:368)
> {noformat}





[jira] [Commented] (MAPREDUCE-5649) Reduce cannot use more than 2G memory for the final merge

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528329#comment-14528329
 ] 

Hudson commented on MAPREDUCE-5649:
---

SUCCESS: Integrated in Hadoop-Yarn-trunk #918 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/918/])
MAPREDUCE-5649. Reduce cannot use more than 2G memory for the final merge. 
Contributed by Gera Shegalov (jlowe: rev 
7dc3c1203d1ab14c09d0aaf0869a5bcdfafb0a5a)
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMergeManager.java


> Reduce cannot use more than 2G memory  for the final merge
> --
>
> Key: MAPREDUCE-5649
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5649
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Reporter: stanley shi
>Assignee: Gera Shegalov
> Fix For: 2.8.0
>
> Attachments: MAPREDUCE-5649.001.patch, MAPREDUCE-5649.002.patch, 
> MAPREDUCE-5649.003.patch
>
>
> In org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl, in the 
> finalMerge method: 
> {code}
> int maxInMemReduce = (int)Math.min(
>     Runtime.getRuntime().maxMemory() * maxRedPer, Integer.MAX_VALUE);
> {code}
> This means that no matter how much memory the user has, the reducer will not 
> retain more than 2 GB of data in memory before the reduce phase starts.





[jira] [Created] (MAPREDUCE-6356) Misspelling of threshold in log4j.properties for tests

2015-05-05 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created MAPREDUCE-6356:
---

 Summary: Misspelling of threshold in log4j.properties for tests
 Key: MAPREDUCE-6356
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6356
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Minor


The log4j.properties file for tests contains the misspelling "log4j.threshhold".
It should be "log4j.threshold".
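For illustration, a minimal before/after of the fix (the ALL value is just an example): the misspelled key is not a recognized log4j property, so the intended threshold is never applied.

```properties
# Misspelled: log4j does not recognize this key, so no threshold takes effect.
log4j.threshhold=ALL

# Corrected: the repository-wide threshold is actually applied.
log4j.threshold=ALL
```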





[jira] [Updated] (MAPREDUCE-6354) shuffle handler should log connection info

2015-05-05 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated MAPREDUCE-6354:
--
Target Version/s: 2.8.0
  Status: Open  (was: Patch Available)

OK, let's go with the approach of adding an audit log for the ShuffleHandler.  
To further mitigate concerns, let's set this up as a log that is disabled by 
default (e.g., it only emits at the WARN level, but users can modify the log 
properties to enable INFO logging and redirect it to another file if desired).
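The setup described above could look roughly like this in log4j.properties. This is a sketch under assumptions: the audit logger name and appender name below are hypothetical and depend on how the patch names the logger.

```properties
# Disabled by default: the audit logger only passes WARN and above.
log4j.logger.org.apache.hadoop.mapred.ShuffleHandler.audit=WARN

# To enable, raise the level to INFO and route to a dedicated file:
#log4j.logger.org.apache.hadoop.mapred.ShuffleHandler.audit=INFO,SHUFFLEAUDIT
#log4j.additivity.org.apache.hadoop.mapred.ShuffleHandler.audit=false
#log4j.appender.SHUFFLEAUDIT=org.apache.log4j.RollingFileAppender
#log4j.appender.SHUFFLEAUDIT.File=${hadoop.log.dir}/shuffle-audit.log
#log4j.appender.SHUFFLEAUDIT.layout=org.apache.log4j.PatternLayout
```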

> shuffle handler should log connection info
> --
>
> Key: MAPREDUCE-6354
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6354
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: MAPREDUCE-6354.2.patch, MAPREDUCE-6354.3.patch, 
> MAPREDUCE-6354.patch
>
>
> Currently, the shuffle handler only logs connection info in debug mode; we 
> want to log that info in a more concise way.





[jira] [Updated] (MAPREDUCE-6356) Misspelling of threshold in log4j.properties for tests

2015-05-05 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated MAPREDUCE-6356:

Attachment: MAPREDUCE-6356.patch

> Misspelling of threshold in log4j.properties for tests
> --
>
> Key: MAPREDUCE-6356
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6356
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: MAPREDUCE-6356.patch
>
>
> The log4j.properties file for tests contains the misspelling "log4j.threshhold".
> It should be "log4j.threshold".





[jira] [Commented] (MAPREDUCE-6356) Misspelling of threshold in log4j.properties for tests

2015-05-05 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528480#comment-14528480
 ] 

Brahma Reddy Battula commented on MAPREDUCE-6356:
-

The attached patch fixes the typo in log4j.properties.

> Misspelling of threshold in log4j.properties for tests
> --
>
> Key: MAPREDUCE-6356
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6356
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: MAPREDUCE-6356.patch
>
>
> The log4j.properties file for tests contains the misspelling "log4j.threshhold".
> It should be "log4j.threshold".





[jira] [Updated] (MAPREDUCE-6356) Misspelling of threshold in log4j.properties for tests

2015-05-05 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated MAPREDUCE-6356:

Status: Patch Available  (was: Open)

> Misspelling of threshold in log4j.properties for tests
> --
>
> Key: MAPREDUCE-6356
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6356
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: MAPREDUCE-6356.patch
>
>
> The log4j.properties file for tests contains the misspelling "log4j.threshhold".
> It should be "log4j.threshold".





[jira] [Commented] (MAPREDUCE-6165) [JDK8] TestCombineFileInputFormat failed on JDK8

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528536#comment-14528536
 ] 

Hudson commented on MAPREDUCE-6165:
---

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2116 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2116/])
MAPREDUCE-6165. [JDK8] TestCombineFileInputFormat failed on JDK8. Contributed 
by Akira AJISAKA. (ozawa: rev 551615fa13f65ae996bae9c1bacff189539b6557)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java
* hadoop-mapreduce-project/CHANGES.txt


> [JDK8] TestCombineFileInputFormat failed on JDK8
> 
>
> Key: MAPREDUCE-6165
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6165
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Wei Yan
>Assignee: Akira AJISAKA
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: MAPREDUCE-6165-001.patch, MAPREDUCE-6165-002.patch, 
> MAPREDUCE-6165-003.patch, MAPREDUCE-6165-003.patch, MAPREDUCE-6165-004.patch, 
> MAPREDUCE-6165-reproduce.patch
>
>
> The error msg:
> {noformat}
> testSplitPlacementForCompressedFiles(org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat)
>   Time elapsed: 2.487 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<2> but was:<1>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at junit.framework.TestCase.assertEquals(TestCase.java:409)
>   at 
> org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacementForCompressedFiles(TestCombineFileInputFormat.java:911)
> testSplitPlacement(org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat)
>   Time elapsed: 0.985 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<2> but was:<1>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at junit.framework.TestCase.assertEquals(TestCase.java:409)
>   at 
> org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacement(TestCombineFileInputFormat.java:368)
> {noformat}





[jira] [Commented] (MAPREDUCE-6259) IllegalArgumentException due to missing job submit time

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528538#comment-14528538
 ] 

Hudson commented on MAPREDUCE-6259:
---

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2116 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2116/])
MAPREDUCE-6259. IllegalArgumentException due to missing job submit time. 
Contributed by zhihai xu (jlowe: rev bf70c5ae2824a9139c1aa9d7c14020018881cec2)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/AMStartedEvent.java
* hadoop-mapreduce-project/CHANGES.txt


> IllegalArgumentException due to missing job submit time
> ---
>
> Key: MAPREDUCE-6259
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6259
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.7.1
>
> Attachments: MAPREDUCE-6259.000.patch
>
>
> A -1 job submit time causes an IllegalArgumentException when parsing the job 
> history file name, and JOB_INIT_FAILED causes the -1 job submit time in 
> JobIndexInfo.
> We found the following job history file name, which causes an 
> IllegalArgumentException when parsing the job status in the job history file 
> name.
> {code}
> job_1418398645407_115853--1-worun-kafka%2Dto%2Dhdfs%5Btwo%5D%5B15+topic%28s%29%5D-1423572836007-0-0-FAILED-root.journaling-1423572836007.jhist
> {code}
> The stack trace for the IllegalArgumentException is
> {code}
> 2015-02-10 04:54:01,863 WARN org.apache.hadoop.mapreduce.v2.hs.PartialJob: 
> Exception while parsing job state. Defaulting to KILLED
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.mapreduce.v2.api.records.JobState.0
>   at java.lang.Enum.valueOf(Enum.java:236)
>   at 
> org.apache.hadoop.mapreduce.v2.api.records.JobState.valueOf(JobState.java:21)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.PartialJob.getState(PartialJob.java:82)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.PartialJob.(PartialJob.java:59)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getAllPartialJobs(CachedHistoryStorage.java:159)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getPartialJobs(CachedHistoryStorage.java:173)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.JobHistory.getPartialJobs(JobHistory.java:284)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.webapp.HsWebServices.getJobs(HsWebServices.java:212)
>   at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
>   at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
>   at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.serv
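
The "No enum constant JobState.0" above falls out of the field layout of the .jhist file name: the -1 submit time contributes an extra hyphen, shifting every later hyphen-delimited field by one, so the slot where the parser expects the job status holds the reduce count "0" instead of "FAILED". A minimal sketch of that shift, using the file name quoted above (the field layout and index are assumptions based on the stack trace, not code copied from JobIndexInfo):

```java
public class JhistNameShift {
    // The problematic file name from the report, with the "--1" submit time.
    static final String NAME =
        "job_1418398645407_115853--1-worun-"
        + "kafka%2Dto%2Dhdfs%5Btwo%5D%5B15+topic%28s%29%5D"
        + "-1423572836007-0-0-FAILED-root.journaling-1423572836007.jhist";

    // Return the token at the index where a parser would expect the job status,
    // assuming the layout: jobId-submitTime-user-jobName-finishTime-numMaps-numReduces-status-...
    static String statusField(String fileName) {
        String[] parts = fileName.split("-");
        return parts[7];
    }

    public static void main(String[] args) {
        // "--1" splits into an empty token plus "1", shifting later fields by one,
        // so the status slot holds the reduce count "0" rather than "FAILED".
        System.out.println(statusField(NAME)); // prints "0"
    }
}
```

JobState.valueOf("0") then has no matching enum constant, which is exactly the exception in the stack trace.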

[jira] [Commented] (MAPREDUCE-5649) Reduce cannot use more than 2G memory for the final merge

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528544#comment-14528544
 ] 

Hudson commented on MAPREDUCE-5649:
---

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2116 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2116/])
MAPREDUCE-5649. Reduce cannot use more than 2G memory for the final merge. 
Contributed by Gera Shegalov (jlowe: rev 
7dc3c1203d1ab14c09d0aaf0869a5bcdfafb0a5a)
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMergeManager.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java


> Reduce cannot use more than 2G memory for the final merge
> --
>
> Key: MAPREDUCE-5649
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5649
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Reporter: stanley shi
>Assignee: Gera Shegalov
> Fix For: 2.8.0
>
> Attachments: MAPREDUCE-5649.001.patch, MAPREDUCE-5649.002.patch, 
> MAPREDUCE-5649.003.patch
>
>
> In the org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.java file, in 
> the finalMerge method: 
> {code}
> int maxInMemReduce = (int) Math.min(
>     Runtime.getRuntime().maxMemory() * maxRedPer, Integer.MAX_VALUE);
> {code}
> This means that no matter how much memory the user has, the reducer will not 
> retain more than 2 GB of data in memory before the reduce phase starts.
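
A minimal sketch of why the quoted expression caps at 2 GB: Integer.MAX_VALUE bounds the min, and the cast to int cannot represent more bytes than that, regardless of heap size (the method and variable names here are illustrative, not taken from MergeManagerImpl):

```java
public class FinalMergeCap {
    // Mirrors the quoted expression: min(heapBytes * fraction, Integer.MAX_VALUE),
    // cast down to int. The int result can never exceed 2^31 - 1 bytes (~2 GB).
    static int maxInMemReduce(long maxMemory, double maxRedPer) {
        return (int) Math.min(maxMemory * maxRedPer, Integer.MAX_VALUE);
    }

    public static void main(String[] args) {
        long heap64g = 64L * 1024 * 1024 * 1024;
        // Even with a 64 GB heap and a 0.9 fraction, the result saturates at the cap.
        System.out.println(maxInMemReduce(heap64g, 0.9) == Integer.MAX_VALUE); // prints "true"
    }
}
```

Fixing this requires widening the computation to long rather than clamping at Integer.MAX_VALUE, which is what the committed patch addresses.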



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6259) IllegalArgumentException due to missing job submit time

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528558#comment-14528558
 ] 

Hudson commented on MAPREDUCE-6259:
---

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #175 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/175/])
MAPREDUCE-6259. IllegalArgumentException due to missing job submit time. 
Contributed by zhihai xu (jlowe: rev bf70c5ae2824a9139c1aa9d7c14020018881cec2)
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/AMStartedEvent.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java


> IllegalArgumentException due to missing job submit time
> ---
>
> Key: MAPREDUCE-6259
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6259
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.7.1
>
> Attachments: MAPREDUCE-6259.000.patch
>
>
> A job submit time of -1 causes an IllegalArgumentException when the job history 
> file name is parsed, and JOB_INIT_FAILED causes the -1 job submit time in JobIndexInfo.
> We found the following job history file name, which causes an 
> IllegalArgumentException when the job status is parsed from the file 
> name.
> {code}
> job_1418398645407_115853--1-worun-kafka%2Dto%2Dhdfs%5Btwo%5D%5B15+topic%28s%29%5D-1423572836007-0-0-FAILED-root.journaling-1423572836007.jhist
> {code}
> The stack trace for the IllegalArgumentException is
> {code}
> 2015-02-10 04:54:01,863 WARN org.apache.hadoop.mapreduce.v2.hs.PartialJob: 
> Exception while parsing job state. Defaulting to KILLED
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.mapreduce.v2.api.records.JobState.0
>   at java.lang.Enum.valueOf(Enum.java:236)
>   at 
> org.apache.hadoop.mapreduce.v2.api.records.JobState.valueOf(JobState.java:21)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.PartialJob.getState(PartialJob.java:82)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.PartialJob.(PartialJob.java:59)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getAllPartialJobs(CachedHistoryStorage.java:159)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getPartialJobs(CachedHistoryStorage.java:173)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.JobHistory.getPartialJobs(JobHistory.java:284)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.webapp.HsWebServices.getJobs(HsWebServices.java:212)
>   at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
>   at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
>   at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
>   at 
> com.sun.jersey.spi.container.servlet.ServletCont

[jira] [Commented] (MAPREDUCE-6165) [JDK8] TestCombineFileInputFormat failed on JDK8

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528556#comment-14528556
 ] 

Hudson commented on MAPREDUCE-6165:
---

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #175 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/175/])
MAPREDUCE-6165. [JDK8] TestCombineFileInputFormat failed on JDK8. Contributed 
by Akira AJISAKA. (ozawa: rev 551615fa13f65ae996bae9c1bacff189539b6557)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java


> [JDK8] TestCombineFileInputFormat failed on JDK8
> 
>
> Key: MAPREDUCE-6165
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6165
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Wei Yan
>Assignee: Akira AJISAKA
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: MAPREDUCE-6165-001.patch, MAPREDUCE-6165-002.patch, 
> MAPREDUCE-6165-003.patch, MAPREDUCE-6165-003.patch, MAPREDUCE-6165-004.patch, 
> MAPREDUCE-6165-reproduce.patch
>
>
> The error msg:
> {noformat}
> testSplitPlacementForCompressedFiles(org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat)
>   Time elapsed: 2.487 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<2> but was:<1>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at junit.framework.TestCase.assertEquals(TestCase.java:409)
>   at 
> org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacementForCompressedFiles(TestCombineFileInputFormat.java:911)
> testSplitPlacement(org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat)
>   Time elapsed: 0.985 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<2> but was:<1>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at junit.framework.TestCase.assertEquals(TestCase.java:409)
>   at 
> org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacement(TestCombineFileInputFormat.java:368)
> {noformat}





[jira] [Commented] (MAPREDUCE-5649) Reduce cannot use more than 2G memory for the final merge

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528564#comment-14528564
 ] 

Hudson commented on MAPREDUCE-5649:
---

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #175 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/175/])
MAPREDUCE-5649. Reduce cannot use more than 2G memory for the final merge. 
Contributed by Gera Shegalov (jlowe: rev 
7dc3c1203d1ab14c09d0aaf0869a5bcdfafb0a5a)
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMergeManager.java


> Reduce cannot use more than 2G memory for the final merge
> --
>
> Key: MAPREDUCE-5649
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5649
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Reporter: stanley shi
>Assignee: Gera Shegalov
> Fix For: 2.8.0
>
> Attachments: MAPREDUCE-5649.001.patch, MAPREDUCE-5649.002.patch, 
> MAPREDUCE-5649.003.patch
>
>
> In the org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.java file, in 
> the finalMerge method: 
> {code}
> int maxInMemReduce = (int) Math.min(
>     Runtime.getRuntime().maxMemory() * maxRedPer, Integer.MAX_VALUE);
> {code}
> This means that no matter how much memory the user has, the reducer will not 
> retain more than 2 GB of data in memory before the reduce phase starts.





[jira] [Commented] (MAPREDUCE-5649) Reduce cannot use more than 2G memory for the final merge

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528615#comment-14528615
 ] 

Hudson commented on MAPREDUCE-5649:
---

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #185 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/185/])
MAPREDUCE-5649. Reduce cannot use more than 2G memory for the final merge. 
Contributed by Gera Shegalov (jlowe: rev 
7dc3c1203d1ab14c09d0aaf0869a5bcdfafb0a5a)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMergeManager.java
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java


> Reduce cannot use more than 2G memory for the final merge
> --
>
> Key: MAPREDUCE-5649
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5649
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Reporter: stanley shi
>Assignee: Gera Shegalov
> Fix For: 2.8.0
>
> Attachments: MAPREDUCE-5649.001.patch, MAPREDUCE-5649.002.patch, 
> MAPREDUCE-5649.003.patch
>
>
> In the org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.java file, in 
> the finalMerge method: 
> {code}
> int maxInMemReduce = (int) Math.min(
>     Runtime.getRuntime().maxMemory() * maxRedPer, Integer.MAX_VALUE);
> {code}
> This means that no matter how much memory the user has, the reducer will not 
> retain more than 2 GB of data in memory before the reduce phase starts.





[jira] [Commented] (MAPREDUCE-6165) [JDK8] TestCombineFileInputFormat failed on JDK8

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528607#comment-14528607
 ] 

Hudson commented on MAPREDUCE-6165:
---

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #185 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/185/])
MAPREDUCE-6165. [JDK8] TestCombineFileInputFormat failed on JDK8. Contributed 
by Akira AJISAKA. (ozawa: rev 551615fa13f65ae996bae9c1bacff189539b6557)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java
* hadoop-mapreduce-project/CHANGES.txt


> [JDK8] TestCombineFileInputFormat failed on JDK8
> 
>
> Key: MAPREDUCE-6165
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6165
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Wei Yan
>Assignee: Akira AJISAKA
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: MAPREDUCE-6165-001.patch, MAPREDUCE-6165-002.patch, 
> MAPREDUCE-6165-003.patch, MAPREDUCE-6165-003.patch, MAPREDUCE-6165-004.patch, 
> MAPREDUCE-6165-reproduce.patch
>
>
> The error msg:
> {noformat}
> testSplitPlacementForCompressedFiles(org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat)
>   Time elapsed: 2.487 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<2> but was:<1>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at junit.framework.TestCase.assertEquals(TestCase.java:409)
>   at 
> org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacementForCompressedFiles(TestCombineFileInputFormat.java:911)
> testSplitPlacement(org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat)
>   Time elapsed: 0.985 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<2> but was:<1>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at junit.framework.TestCase.assertEquals(TestCase.java:409)
>   at 
> org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacement(TestCombineFileInputFormat.java:368)
> {noformat}





[jira] [Commented] (MAPREDUCE-6259) IllegalArgumentException due to missing job submit time

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528609#comment-14528609
 ] 

Hudson commented on MAPREDUCE-6259:
---

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #185 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/185/])
MAPREDUCE-6259. IllegalArgumentException due to missing job submit time. 
Contributed by zhihai xu (jlowe: rev bf70c5ae2824a9139c1aa9d7c14020018881cec2)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/AMStartedEvent.java


> IllegalArgumentException due to missing job submit time
> ---
>
> Key: MAPREDUCE-6259
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6259
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.7.1
>
> Attachments: MAPREDUCE-6259.000.patch
>
>
> A job submit time of -1 causes an IllegalArgumentException when the job history 
> file name is parsed, and JOB_INIT_FAILED causes the -1 job submit time in JobIndexInfo.
> We found the following job history file name, which causes an 
> IllegalArgumentException when the job status is parsed from the file 
> name.
> {code}
> job_1418398645407_115853--1-worun-kafka%2Dto%2Dhdfs%5Btwo%5D%5B15+topic%28s%29%5D-1423572836007-0-0-FAILED-root.journaling-1423572836007.jhist
> {code}
> The stack trace for the IllegalArgumentException is
> {code}
> 2015-02-10 04:54:01,863 WARN org.apache.hadoop.mapreduce.v2.hs.PartialJob: 
> Exception while parsing job state. Defaulting to KILLED
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.mapreduce.v2.api.records.JobState.0
>   at java.lang.Enum.valueOf(Enum.java:236)
>   at 
> org.apache.hadoop.mapreduce.v2.api.records.JobState.valueOf(JobState.java:21)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.PartialJob.getState(PartialJob.java:82)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.PartialJob.(PartialJob.java:59)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getAllPartialJobs(CachedHistoryStorage.java:159)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getPartialJobs(CachedHistoryStorage.java:173)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.JobHistory.getPartialJobs(JobHistory.java:284)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.webapp.HsWebServices.getJobs(HsWebServices.java:212)
>   at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
>   at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
>   at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
>   at 
> com.sun.jersey.spi.container.servlet.S

[jira] [Commented] (MAPREDUCE-5649) Reduce cannot use more than 2G memory for the final merge

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528657#comment-14528657
 ] 

Hudson commented on MAPREDUCE-5649:
---

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2134 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2134/])
MAPREDUCE-5649. Reduce cannot use more than 2G memory for the final merge. 
Contributed by Gera Shegalov (jlowe: rev 
7dc3c1203d1ab14c09d0aaf0869a5bcdfafb0a5a)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMergeManager.java
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java


> Reduce cannot use more than 2G memory for the final merge
> --
>
> Key: MAPREDUCE-5649
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5649
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Reporter: stanley shi
>Assignee: Gera Shegalov
> Fix For: 2.8.0
>
> Attachments: MAPREDUCE-5649.001.patch, MAPREDUCE-5649.002.patch, 
> MAPREDUCE-5649.003.patch
>
>
> In the org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.java file, in 
> the finalMerge method: 
> {code}
> int maxInMemReduce = (int) Math.min(
>     Runtime.getRuntime().maxMemory() * maxRedPer, Integer.MAX_VALUE);
> {code}
> This means that no matter how much memory the user has, the reducer will not 
> retain more than 2 GB of data in memory before the reduce phase starts.





[jira] [Commented] (MAPREDUCE-6165) [JDK8] TestCombineFileInputFormat failed on JDK8

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528649#comment-14528649
 ] 

Hudson commented on MAPREDUCE-6165:
---

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2134 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2134/])
MAPREDUCE-6165. [JDK8] TestCombineFileInputFormat failed on JDK8. Contributed 
by Akira AJISAKA. (ozawa: rev 551615fa13f65ae996bae9c1bacff189539b6557)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/input/CombineFileInputFormat.java
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java


> [JDK8] TestCombineFileInputFormat failed on JDK8
> 
>
> Key: MAPREDUCE-6165
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6165
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Wei Yan
>Assignee: Akira AJISAKA
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: MAPREDUCE-6165-001.patch, MAPREDUCE-6165-002.patch, 
> MAPREDUCE-6165-003.patch, MAPREDUCE-6165-003.patch, MAPREDUCE-6165-004.patch, 
> MAPREDUCE-6165-reproduce.patch
>
>
> The error msg:
> {noformat}
> testSplitPlacementForCompressedFiles(org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat)
>   Time elapsed: 2.487 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<2> but was:<1>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at junit.framework.TestCase.assertEquals(TestCase.java:409)
>   at 
> org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacementForCompressedFiles(TestCombineFileInputFormat.java:911)
> testSplitPlacement(org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat)
>   Time elapsed: 0.985 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<2> but was:<1>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at junit.framework.TestCase.assertEquals(TestCase.java:409)
>   at 
> org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat.testSplitPlacement(TestCombineFileInputFormat.java:368)
> {noformat}





[jira] [Commented] (MAPREDUCE-6259) IllegalArgumentException due to missing job submit time

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528651#comment-14528651
 ] 

Hudson commented on MAPREDUCE-6259:
---

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2134 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2134/])
MAPREDUCE-6259. IllegalArgumentException due to missing job submit time. 
Contributed by zhihai xu (jlowe: rev bf70c5ae2824a9139c1aa9d7c14020018881cec2)
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/AMStartedEvent.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEventHandler.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java
* hadoop-mapreduce-project/CHANGES.txt


> IllegalArgumentException due to missing job submit time
> ---
>
> Key: MAPREDUCE-6259
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6259
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.7.1
>
> Attachments: MAPREDUCE-6259.000.patch
>
>
> A job submit time of -1 causes an IllegalArgumentException when the job history 
> file name is parsed, and JOB_INIT_FAILED causes the -1 job submit time in JobIndexInfo.
> We found the following job history file name, which causes an 
> IllegalArgumentException when the job status is parsed from the file 
> name.
> {code}
> job_1418398645407_115853--1-worun-kafka%2Dto%2Dhdfs%5Btwo%5D%5B15+topic%28s%29%5D-1423572836007-0-0-FAILED-root.journaling-1423572836007.jhist
> {code}
> The stack trace for the IllegalArgumentException is
> {code}
> 2015-02-10 04:54:01,863 WARN org.apache.hadoop.mapreduce.v2.hs.PartialJob: 
> Exception while parsing job state. Defaulting to KILLED
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.mapreduce.v2.api.records.JobState.0
>   at java.lang.Enum.valueOf(Enum.java:236)
>   at 
> org.apache.hadoop.mapreduce.v2.api.records.JobState.valueOf(JobState.java:21)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.PartialJob.getState(PartialJob.java:82)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.PartialJob.(PartialJob.java:59)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getAllPartialJobs(CachedHistoryStorage.java:159)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getPartialJobs(CachedHistoryStorage.java:173)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.JobHistory.getPartialJobs(JobHistory.java:284)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.webapp.HsWebServices.getJobs(HsWebServices.java:212)
>   at sun.reflect.GeneratedMethodAccessor63.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
>   at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
>   at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
>   at 
> com.sun.jersey.spi.container.servlet.ServletCont

[jira] [Updated] (MAPREDUCE-6279) AM should explicitly exit JVM after all services have stopped

2015-05-05 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated MAPREDUCE-6279:
--
Attachment: MAPREDUCE-6279.v4.patch

Fixing whitespace errors.

> AM should explicitly exit JVM after all services have stopped
> 
>
> Key: MAPREDUCE-6279
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6279
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Affects Versions: 2.5.0
>Reporter: Jason Lowe
>Assignee: Eric Payne
> Attachments: MAPREDUCE-6279.v1.txt, MAPREDUCE-6279.v2.txt, 
> MAPREDUCE-6279.v3.patch, MAPREDUCE-6279.v4.patch
>
>
> Occasionally the MapReduce AM can "get stuck" trying to shut down.  
> MAPREDUCE-6049 and MAPREDUCE-5888 were specific instances that have been 
> fixed, but this can also occur with uber jobs if the task code inadvertently 
> leaves non-daemon threads lingering.
> We should explicitly shut down the JVM after the MapReduce AM has unregistered 
> and all services have been stopped.
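The failure mode above can be sketched in a few lines (class and method names here are illustrative, not the actual AM code): a non-daemon thread started by task code outlives service shutdown and blocks normal JVM termination unless the AM exits explicitly.

```java
// Illustrative sketch, not the real MRAppMaster shutdown path.
class AmExitSketch {
    // True if this thread would keep the JVM from terminating normally.
    static boolean wouldBlockJvmExit(Thread t) {
        return t.isAlive() && !t.isDaemon();
    }

    public static void main(String[] args) {
        // Simulates task code (e.g. in an uber job) leaving a thread behind.
        Thread lingering = new Thread(() -> {
            try {
                Thread.sleep(60_000);
            } catch (InterruptedException ignored) {
            }
        });
        lingering.start();  // threads are non-daemon by default

        // ... AM unregisters from the RM and stops all services here ...

        if (wouldBlockJvmExit(lingering)) {
            // The fix proposed in this JIRA: exit explicitly rather than
            // waiting on threads the AM does not own.
            System.exit(0);
        }
    }
}
```

The JVM only terminates normally once every non-daemon thread has finished, which is why a single lingering thread from an uber task can hang the AM indefinitely.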



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6356) Misspelling of threshold in log4j.properties for tests

2015-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528780#comment-14528780
 ] 

Hadoop QA commented on MAPREDUCE-6356:
--

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   5m 15s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 6 new or modified test files. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 20s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | mapreduce tests |  10m 28s | Tests passed in 
hadoop-mapreduce-client-app. |
| {color:green}+1{color} | mapreduce tests |   0m 44s | Tests passed in 
hadoop-mapreduce-client-common. |
| {color:green}+1{color} | mapreduce tests |   1m 55s | Tests passed in 
hadoop-mapreduce-client-core. |
| {color:green}+1{color} | mapreduce tests |   6m 34s | Tests passed in 
hadoop-mapreduce-client-hs. |
| {color:green}+1{color} | mapreduce tests | 108m 28s | Tests passed in 
hadoop-mapreduce-client-jobclient. |
| {color:green}+1{color} | mapreduce tests |   0m 24s | Tests passed in 
hadoop-mapreduce-client-shuffle. |
| | | 143m 44s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12730502/MAPREDUCE-6356.patch |
| Optional Tests | javac unit |
| git revision | trunk / 9356cf8 |
| hadoop-mapreduce-client-app test log | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5644/artifact/patchprocess/testrun_hadoop-mapreduce-client-app.txt
 |
| hadoop-mapreduce-client-common test log | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5644/artifact/patchprocess/testrun_hadoop-mapreduce-client-common.txt
 |
| hadoop-mapreduce-client-core test log | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5644/artifact/patchprocess/testrun_hadoop-mapreduce-client-core.txt
 |
| hadoop-mapreduce-client-hs test log | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5644/artifact/patchprocess/testrun_hadoop-mapreduce-client-hs.txt
 |
| hadoop-mapreduce-client-jobclient test log | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5644/artifact/patchprocess/testrun_hadoop-mapreduce-client-jobclient.txt
 |
| hadoop-mapreduce-client-shuffle test log | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5644/artifact/patchprocess/testrun_hadoop-mapreduce-client-shuffle.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5644/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5644/console |


This message was automatically generated.

> Misspelling of threshold in log4j.properties for tests
> --
>
> Key: MAPREDUCE-6356
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6356
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Minor
> Attachments: MAPREDUCE-6356.patch
>
>
> The log4j.properties file for tests contains the misspelling "log4j.threshhold".
> It should be the correctly spelled "log4j.threshold".
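For reference, the corrected key in a test log4j.properties would look like this (the appender lines are illustrative, not the exact Hadoop test configuration); an unknown key such as the misspelled "log4j.threshhold" is simply ignored by log4j:

```properties
# Correct spelling; the misspelled "log4j.threshhold" key had no effect.
log4j.threshold=ALL

log4j.rootLogger=info,stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
```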





[jira] [Commented] (MAPREDUCE-6279) AM should explicitly exit JVM after all services have stopped

2015-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528878#comment-14528878
 ] 

Hadoop QA commented on MAPREDUCE-6279:
--

\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m 33s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 37s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  0s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 39s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 52s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 38s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 15s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | mapreduce tests |  13m  8s | Tests passed in 
hadoop-mapreduce-client-app. |
| | |  52m 12s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12730530/MAPREDUCE-6279.v4.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 05adc76 |
| hadoop-mapreduce-client-app test log | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5645/artifact/patchprocess/testrun_hadoop-mapreduce-client-app.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5645/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5645/console |


This message was automatically generated.

> AM should explicitly exit JVM after all services have stopped
> 
>
> Key: MAPREDUCE-6279
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6279
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Affects Versions: 2.5.0
>Reporter: Jason Lowe
>Assignee: Eric Payne
> Attachments: MAPREDUCE-6279.v1.txt, MAPREDUCE-6279.v2.txt, 
> MAPREDUCE-6279.v3.patch, MAPREDUCE-6279.v4.patch
>
>
> Occasionally the MapReduce AM can "get stuck" trying to shut down.  
> MAPREDUCE-6049 and MAPREDUCE-5888 were specific instances that have been 
> fixed, but this can also occur with uber jobs if the task code inadvertently 
> leaves non-daemon threads lingering.
> We should explicitly shut down the JVM after the MapReduce AM has unregistered 
> and all services have been stopped.





[jira] [Commented] (MAPREDUCE-6304) Specifying node labels when submitting MR jobs

2015-05-05 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528957#comment-14528957
 ] 

Wangda Tan commented on MAPREDUCE-6304:
---

[~Naganarasimha], thanks for pointing me to yarn.ipc.*.factory, etc. I think 
it's important to:
- Not bring in additional unnecessary default config
- Follow what we have in *default.xml
- Make it easy for admins to understand

So I think it's fine to do what you suggested, but could you please mention in 
the description that the job's node-label-expression is not set by default, in 
which case it will use the queue's default-node-label-expression?

> Specifying node labels when submitting MR jobs
> --
>
> Key: MAPREDUCE-6304
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6304
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>Reporter: Jian Fang
>Assignee: Naganarasimha G R
> Fix For: 2.8.0
>
> Attachments: MAPREDUCE-6304.20150410-1.patch, 
> MAPREDUCE-6304.20150411-1.patch, MAPREDUCE-6304.20150501-1.patch
>
>
> Per the discussion on YARN-796, we need a mechanism in MAPREDUCE to specify 
> node labels when submitting MR jobs.





[jira] [Created] (MAPREDUCE-6357) MultipleOutputs.write() API should document that output committing is not utilized when input path is absolute

2015-05-05 Thread Ivan Mitic (JIRA)
Ivan Mitic created MAPREDUCE-6357:
-

 Summary: MultipleOutputs.write() API should document that output 
committing is not utilized when input path is absolute
 Key: MAPREDUCE-6357
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6357
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.6.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic


After spending the afternoon debugging a user job where reduce tasks were 
failing on retry with the below exception, I think it would be worthwhile to 
add a note in the MultipleOutputs.write() documentation, saying that absolute 
paths may cause improper execution of tasks on retry or when MR speculative 
execution is enabled. 

{code}
2015-04-28 23:13:10,452 WARN [main] org.apache.hadoop.mapred.YarnChild: 
Exception running child : java.io.IOException: File already 
exists:wasb://full20150...@bgtstoragefull.blob.core.windows.net/user/hadoop/some/path/block-r-00299.bz2
   at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.create(NativeAzureFileSystem.java:1354)
   at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.create(NativeAzureFileSystem.java:1195)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
   at 
org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.getRecordWriter(TextOutputFormat.java:135)
   at 
org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.getRecordWriter(MultipleOutputs.java:475)
   at 
org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.write(MultipleOutputs.java:433)
   at 
com.ancestry.bigtree.hadoop.LevelReducer.processValue(LevelReducer.java:91)
   at com.ancestry.bigtree.hadoop.LevelReducer.reduce(LevelReducer.java:69)
   at com.ancestry.bigtree.hadoop.LevelReducer.reduce(LevelReducer.java:14)
   at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
   at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
{code}

As discussed in MAPREDUCE-3772, when the baseOutputPath passed to 
MultipleOutputs.write() is an absolute path (or more precisely a path that 
resolves outside of the job output-dir), the concept of output committing is 
not utilized. 
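A minimal sketch of why that is (method and variable names here are illustrative, not Hadoop's actual code): a relative baseOutputPath resolves under the task attempt's temporary work directory, which the OutputCommitter promotes on commit, while an absolute path is written to directly and therefore bypasses the commit/abort protocol.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical illustration of the baseOutputPath resolution behavior.
class BaseOutputPathSketch {
    // Relative paths land inside the task attempt's temporary work dir
    // (promoted atomically on commit); absolute paths escape it.
    static Path resolve(Path taskAttemptWorkDir, String baseOutputPath) {
        Path p = Paths.get(baseOutputPath);
        return p.isAbsolute() ? p : taskAttemptWorkDir.resolve(p);
    }
}
```

A retried attempt therefore collides with the file its failed predecessor already created at the absolute path, producing the "File already exists" error shown above.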

In this case, the user had read through the MultipleOutputs docs and assumed 
that everything would work fine, since there are blog posts claiming that 
MultipleOutputs handles output commit.





[jira] [Updated] (MAPREDUCE-6174) Combine common stream code into parent class for InMemoryMapOutput and OnDiskMapOutput.

2015-05-05 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated MAPREDUCE-6174:
--
Attachment: MAPREDUCE-6174.003.patch

Fix style and whitespace warnings

> Combine common stream code into parent class for InMemoryMapOutput and 
> OnDiskMapOutput.
> ---
>
> Key: MAPREDUCE-6174
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6174
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2
>Affects Versions: 3.0.0, 2.6.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: MAPREDUCE-6174.002.patch, MAPREDUCE-6174.003.patch, 
> MAPREDUCE-6174.v1.txt
>
>
> Per MAPREDUCE-6166, both InMemoryMapOutput and OnDiskMapOutput do similar 
> things with regard to IFile streams.
> In order to make it explicit that InMemoryMapOutput and OnDiskMapOutput are 
> different from 3rd-party implementations, this JIRA will make them subclass a 
> common class (see 
> https://issues.apache.org/jira/browse/MAPREDUCE-6166?focusedCommentId=14223368&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14223368)
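A rough shape of the proposed refactoring (class and method names below are assumptions for illustration, not taken from the patch): the duplicated stream-copying logic moves into an abstract parent, and each subclass only decides where the bytes land.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Hypothetical common parent; in the patch the real subclasses would be
// InMemoryMapOutput and OnDiskMapOutput.
abstract class MapOutputSketch {
    // Copy loop that both subclasses previously duplicated.
    protected long copyStream(InputStream in, OutputStream out)
            throws IOException {
        byte[] buf = new byte[64 * 1024];
        long total = 0;
        int n;
        while ((n = in.read(buf)) > 0) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    abstract long shuffle(InputStream in) throws IOException;
}

// In-memory variant: buffers the fetched map output in RAM.
class InMemorySketch extends MapOutputSketch {
    final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    @Override
    long shuffle(InputStream in) throws IOException {
        return copyStream(in, buffer);
    }
}
```

Pulling the loop into the parent also makes it explicit that these two classes are the built-in implementations, distinct from third-party MapOutput implementations.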





[jira] [Commented] (MAPREDUCE-6174) Combine common stream code into parent class for InMemoryMapOutput and OnDiskMapOutput.

2015-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14529186#comment-14529186
 ] 

Hadoop QA commented on MAPREDUCE-6174:
--

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 36s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 28s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 33s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   0m 47s | The applied patch generated  2 
new checkstyle issues (total was 8, now 8). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 15s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | mapreduce tests |   1m 37s | Tests passed in 
hadoop-mapreduce-client-core. |
| | |  37m 56s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12730585/MAPREDUCE-6174.003.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / ffce9a3 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5646/artifact/patchprocess/diffcheckstylehadoop-mapreduce-client-core.txt
 |
| hadoop-mapreduce-client-core test log | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5646/artifact/patchprocess/testrun_hadoop-mapreduce-client-core.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5646/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5646/console |


This message was automatically generated.

> Combine common stream code into parent class for InMemoryMapOutput and 
> OnDiskMapOutput.
> ---
>
> Key: MAPREDUCE-6174
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6174
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2
>Affects Versions: 3.0.0, 2.6.0
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: MAPREDUCE-6174.002.patch, MAPREDUCE-6174.003.patch, 
> MAPREDUCE-6174.v1.txt
>
>
> Per MAPREDUCE-6166, both InMemoryMapOutput and OnDiskMapOutput do similar 
> things with regard to IFile streams.
> In order to make it explicit that InMemoryMapOutput and OnDiskMapOutput are 
> different from 3rd-party implementations, this JIRA will make them subclass a 
> common class (see 
> https://issues.apache.org/jira/browse/MAPREDUCE-6166?focusedCommentId=14223368&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14223368)





[jira] [Updated] (MAPREDUCE-2094) org.apache.hadoop.mapreduce.lib.input.FileInputFormat: isSplitable implements unsafe default behaviour that is different from the documented behaviour.

2015-05-05 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated MAPREDUCE-2094:

Attachment: MAPREDUCE-2094-20140727-svn-fixed-spaces.patch

I removed the trailing spaces from the lines I touched.
One line that was reported, however, is a line I didn't touch in my patch.

Question: What should I do about that?

> org.apache.hadoop.mapreduce.lib.input.FileInputFormat: isSplitable implements 
> unsafe default behaviour that is different from the documented behaviour.
> ---
>
> Key: MAPREDUCE-2094
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2094
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: task
>Reporter: Niels Basjes
>Assignee: Niels Basjes
> Attachments: MAPREDUCE-2094-2011-05-19.patch, 
> MAPREDUCE-2094-20140727-svn-fixed-spaces.patch, 
> MAPREDUCE-2094-20140727-svn.patch, MAPREDUCE-2094-20140727.patch, 
> MAPREDUCE-2094-FileInputFormat-docs-v2.patch
>
>
> When implementing a custom derivative of FileInputFormat, we ran into the 
> effect that a large gzipped input file would be processed several times: a 
> file of nearly 1 GiB was processed around 36 times in its entirety, 
> producing garbage results and using far more CPU time than needed.
> It took a while to figure out, and what we found is that the default 
> implementation of the isSplitable method in 
> [org.apache.hadoop.mapreduce.lib.input.FileInputFormat | 
> http://svn.apache.org/viewvc/hadoop/mapreduce/trunk/src/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java?view=markup
>  ] is simply "return true;". 
> This is a very unsafe default and contradicts the JavaDoc of the method, 
> which states: "Is the given filename splitable? Usually, true, but if the 
> file is stream compressed, it will not be." The actual implementation 
> effectively behaves as: "Is the given filename splitable? Always true, even 
> if the file is stream compressed using an unsplittable compression codec."
> For our situation (where we always have gzipped input) we took the easy way 
> out and simply implemented an isSplitable in our class that does "return 
> false;".
> Now there are essentially 3 ways I can think of to fix this (in order of 
> preference):
> # Implement something that looks at the compression used by the file (i.e., 
> migrate the implementation from TextInputFormat to FileInputFormat). This 
> would make the method do what the JavaDoc describes.
> # "Force" developers to think about it by making the method abstract.
> # Use a "safe" default (i.e., return false).
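A self-contained sketch of what option 1 amounts to (file-suffix matching here stands in for Hadoop's actual codec lookup via CompressionCodecFactory and the SplittableCompressionCodec check; the names are illustrative):

```java
// Simplified stand-in for deciding splittability from the compression
// codec instead of unconditionally returning true.
class SplittabilitySketch {
    static boolean isSplittable(String fileName) {
        if (fileName.endsWith(".gz")) {
            return false;  // gzip is stream-compressed: not splittable
        }
        if (fileName.endsWith(".bz2")) {
            return true;   // bzip2 supports block-level splitting
        }
        return true;       // uncompressed files can be split
    }
}
```

With this behavior in FileInputFormat itself, a gzipped input would yield a single split instead of being processed once per split, which is the duplicate-processing bug described above.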





[jira] [Updated] (MAPREDUCE-2094) org.apache.hadoop.mapreduce.lib.input.FileInputFormat: isSplitable implements unsafe default behaviour that is different from the documented behaviour.

2015-05-05 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated MAPREDUCE-2094:

Status: Open  (was: Patch Available)

> org.apache.hadoop.mapreduce.lib.input.FileInputFormat: isSplitable implements 
> unsafe default behaviour that is different from the documented behaviour.
> ---
>
> Key: MAPREDUCE-2094
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2094
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: task
>Reporter: Niels Basjes
>Assignee: Niels Basjes
> Attachments: MAPREDUCE-2094-2011-05-19.patch, 
> MAPREDUCE-2094-20140727-svn-fixed-spaces.patch, 
> MAPREDUCE-2094-20140727-svn.patch, MAPREDUCE-2094-20140727.patch, 
> MAPREDUCE-2094-FileInputFormat-docs-v2.patch
>
>
> When implementing a custom derivative of FileInputFormat, we ran into the 
> effect that a large gzipped input file would be processed several times: a 
> file of nearly 1 GiB was processed around 36 times in its entirety, 
> producing garbage results and using far more CPU time than needed.
> It took a while to figure out, and what we found is that the default 
> implementation of the isSplitable method in 
> [org.apache.hadoop.mapreduce.lib.input.FileInputFormat | 
> http://svn.apache.org/viewvc/hadoop/mapreduce/trunk/src/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java?view=markup
>  ] is simply "return true;". 
> This is a very unsafe default and contradicts the JavaDoc of the method, 
> which states: "Is the given filename splitable? Usually, true, but if the 
> file is stream compressed, it will not be." The actual implementation 
> effectively behaves as: "Is the given filename splitable? Always true, even 
> if the file is stream compressed using an unsplittable compression codec."
> For our situation (where we always have gzipped input) we took the easy way 
> out and simply implemented an isSplitable in our class that does "return 
> false;".
> Now there are essentially 3 ways I can think of to fix this (in order of 
> preference):
> # Implement something that looks at the compression used by the file (i.e., 
> migrate the implementation from TextInputFormat to FileInputFormat). This 
> would make the method do what the JavaDoc describes.
> # "Force" developers to think about it by making the method abstract.
> # Use a "safe" default (i.e., return false).





[jira] [Updated] (MAPREDUCE-2094) org.apache.hadoop.mapreduce.lib.input.FileInputFormat: isSplitable implements unsafe default behaviour that is different from the documented behaviour.

2015-05-05 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated MAPREDUCE-2094:

Status: Patch Available  (was: Open)

Removed the trailing spaces from the lines I touched.

@[~aw]: Apparently the trailing spaces check is also triggered by the trailing 
space in one of the 'surrounding' lines in the patch file. How should this be 
handled?

> org.apache.hadoop.mapreduce.lib.input.FileInputFormat: isSplitable implements 
> unsafe default behaviour that is different from the documented behaviour.
> ---
>
> Key: MAPREDUCE-2094
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2094
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: task
>Reporter: Niels Basjes
>Assignee: Niels Basjes
> Attachments: MAPREDUCE-2094-2011-05-19.patch, 
> MAPREDUCE-2094-20140727-svn-fixed-spaces.patch, 
> MAPREDUCE-2094-20140727-svn.patch, MAPREDUCE-2094-20140727.patch, 
> MAPREDUCE-2094-FileInputFormat-docs-v2.patch
>
>
> When implementing a custom derivative of FileInputFormat, we ran into the 
> effect that a large gzipped input file would be processed several times: a 
> file of nearly 1 GiB was processed around 36 times in its entirety, 
> producing garbage results and using far more CPU time than needed.
> It took a while to figure out, and what we found is that the default 
> implementation of the isSplitable method in 
> [org.apache.hadoop.mapreduce.lib.input.FileInputFormat | 
> http://svn.apache.org/viewvc/hadoop/mapreduce/trunk/src/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java?view=markup
>  ] is simply "return true;". 
> This is a very unsafe default and contradicts the JavaDoc of the method, 
> which states: "Is the given filename splitable? Usually, true, but if the 
> file is stream compressed, it will not be." The actual implementation 
> effectively behaves as: "Is the given filename splitable? Always true, even 
> if the file is stream compressed using an unsplittable compression codec."
> For our situation (where we always have gzipped input) we took the easy way 
> out and simply implemented an isSplitable in our class that does "return 
> false;".
> Now there are essentially 3 ways I can think of to fix this (in order of 
> preference):
> # Implement something that looks at the compression used by the file (i.e., 
> migrate the implementation from TextInputFormat to FileInputFormat). This 
> would make the method do what the JavaDoc describes.
> # "Force" developers to think about it by making the method abstract.
> # Use a "safe" default (i.e., return false).





[jira] [Commented] (MAPREDUCE-2094) org.apache.hadoop.mapreduce.lib.input.FileInputFormat: isSplitable implements unsafe default behaviour that is different from the documented behaviour.

2015-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-2094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14529260#comment-14529260
 ] 

Hadoop QA commented on MAPREDUCE-2094:
--

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12730601/MAPREDUCE-2094-20140727-svn-fixed-spaces.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0100b15 |
| Console output | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5647/console |


This message was automatically generated.

> org.apache.hadoop.mapreduce.lib.input.FileInputFormat: isSplitable implements 
> unsafe default behaviour that is different from the documented behaviour.
> ---
>
> Key: MAPREDUCE-2094
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2094
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: task
>Reporter: Niels Basjes
>Assignee: Niels Basjes
> Attachments: MAPREDUCE-2094-2011-05-19.patch, 
> MAPREDUCE-2094-20140727-svn-fixed-spaces.patch, 
> MAPREDUCE-2094-20140727-svn.patch, MAPREDUCE-2094-20140727.patch, 
> MAPREDUCE-2094-FileInputFormat-docs-v2.patch
>
>
> When implementing a custom derivative of FileInputFormat, we ran into the 
> effect that a large gzipped input file would be processed several times: a 
> file of nearly 1 GiB was processed around 36 times in its entirety, 
> producing garbage results and using far more CPU time than needed.
> It took a while to figure out, and what we found is that the default 
> implementation of the isSplitable method in 
> [org.apache.hadoop.mapreduce.lib.input.FileInputFormat | 
> http://svn.apache.org/viewvc/hadoop/mapreduce/trunk/src/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java?view=markup
>  ] is simply "return true;". 
> This is a very unsafe default and contradicts the JavaDoc of the method, 
> which states: "Is the given filename splitable? Usually, true, but if the 
> file is stream compressed, it will not be." The actual implementation 
> effectively behaves as: "Is the given filename splitable? Always true, even 
> if the file is stream compressed using an unsplittable compression codec."
> For our situation (where we always have gzipped input) we took the easy way 
> out and simply implemented an isSplitable in our class that does "return 
> false;".
> Now there are essentially 3 ways I can think of to fix this (in order of 
> preference):
> # Implement something that looks at the compression used by the file (i.e., 
> migrate the implementation from TextInputFormat to FileInputFormat). This 
> would make the method do what the JavaDoc describes.
> # "Force" developers to think about it by making the method abstract.
> # Use a "safe" default (i.e., return false).





[jira] [Updated] (MAPREDUCE-2094) org.apache.hadoop.mapreduce.lib.input.FileInputFormat: isSplitable implements unsafe default behaviour that is different from the documented behaviour.

2015-05-05 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated MAPREDUCE-2094:

Status: Open  (was: Patch Available)

> org.apache.hadoop.mapreduce.lib.input.FileInputFormat: isSplitable implements 
> unsafe default behaviour that is different from the documented behaviour.
> ---
>
> Key: MAPREDUCE-2094
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2094
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: task
>Reporter: Niels Basjes
>Assignee: Niels Basjes
> Attachments: MAPREDUCE-2094-2011-05-19.patch, 
> MAPREDUCE-2094-20140727-svn-fixed-spaces.patch, 
> MAPREDUCE-2094-20140727-svn.patch, MAPREDUCE-2094-20140727.patch, 
> MAPREDUCE-2094-FileInputFormat-docs-v2.patch
>
>
> When implementing a custom derivative of FileInputFormat, we ran into the 
> effect that a large gzipped input file would be processed several times: a 
> file of nearly 1 GiB was processed around 36 times in its entirety, 
> producing garbage results and using far more CPU time than needed.
> It took a while to figure out, and what we found is that the default 
> implementation of the isSplitable method in 
> [org.apache.hadoop.mapreduce.lib.input.FileInputFormat | 
> http://svn.apache.org/viewvc/hadoop/mapreduce/trunk/src/java/org/apache/hadoop/mapreduce/lib/input/FileInputFormat.java?view=markup
>  ] is simply "return true;". 
> This is a very unsafe default and contradicts the JavaDoc of the method, 
> which states: "Is the given filename splitable? Usually, true, but if the 
> file is stream compressed, it will not be." The actual implementation 
> effectively behaves as: "Is the given filename splitable? Always true, even 
> if the file is stream compressed using an unsplittable compression codec."
> For our situation (where we always have gzipped input) we took the easy way 
> out and simply implemented an isSplitable in our class that does "return 
> false;".
> Now there are essentially 3 ways I can think of to fix this (in order of 
> preference):
> # Implement something that looks at the compression used by the file (i.e., 
> migrate the implementation from TextInputFormat to FileInputFormat). This 
> would make the method do what the JavaDoc describes.
> # "Force" developers to think about it by making the method abstract.
> # Use a "safe" default (i.e., return false).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-2094) org.apache.hadoop.mapreduce.lib.input.FileInputFormat: isSplitable implements unsafe default behaviour that is different from the documented behaviour.

2015-05-05 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated MAPREDUCE-2094:

Status: Patch Available  (was: Open)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-2094) org.apache.hadoop.mapreduce.lib.input.FileInputFormat: isSplitable implements unsafe default behaviour that is different from the documented behaviour.

2015-05-05 Thread Niels Basjes (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niels Basjes updated MAPREDUCE-2094:

Attachment: MAPREDUCE-2094-2015-05-05-2328.patch

This should work




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6349) Fix typo in property org.apache.hadoop.mapreduce.lib.chain.Chain.REDUCER_INPUT_VALUE_CLASS

2015-05-05 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14529335#comment-14529335
 ] 

Ray Chiang commented on MAPREDUCE-6349:
---

Thanks for the review and the commit!

> Fix typo in property 
> org.apache.hadoop.mapreduce.lib.chain.Chain.REDUCER_INPUT_VALUE_CLASS
> --
>
> Key: MAPREDUCE-6349
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6349
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: BB2015-05-TBR, newbie
> Fix For: 2.8.0
>
> Attachments: MAPREDUCE-6349.001.patch
>
>
> Ran across this typo in a property.  It doesn't look like it's used anywhere 
> externally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6192) Create unit test to automatically compare MR related classes and mapred-default.xml

2015-05-05 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated MAPREDUCE-6192:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Ray.  Committed to trunk and branch-2!

> Create unit test to automatically compare MR related classes and 
> mapred-default.xml
> ---
>
> Key: MAPREDUCE-6192
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6192
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Affects Versions: 2.6.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: MAPREDUCE-6192.001.patch, MAPREDUCE-6192.002.patch, 
> MAPREDUCE-6192.003.patch, MAPREDUCE-6192.004.patch, MAPREDUCE-6192.005.patch, 
> MAPREDUCE-6192.006.patch, MAPREDUCE-6192.007.patch, 
> MAPREDUCE-6192.branch-2.007.patch
>
>
> Create a unit test that will automatically compare the fields in the various 
> MapReduce related classes and mapred-default.xml. It should throw an error if 
> a property is missing in either the class or the file.
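
A hedged sketch of how such a test can work: reflect over the String constants of a configuration class and compare them with the property names declared in the XML. MyJobConfig and the embedded XML string below are tiny hypothetical stand-ins for MRJobConfig and mapred-default.xml, and the regex-based parsing is a simplification of real XML handling:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ConfigFieldsCheck {
    // Hypothetical stand-in for MRJobConfig's String constants.
    static class MyJobConfig {
        public static final String MAP_MEMORY_MB = "mapreduce.map.memory.mb";
        public static final String REDUCE_MEMORY_MB = "mapreduce.reduce.memory.mb";
    }

    // Tiny stand-in for mapred-default.xml.
    static final String XML =
        "<configuration>"
        + "<property><name>mapreduce.map.memory.mb</name><value>1024</value></property>"
        + "<property><name>mapreduce.job.maps</name><value>2</value></property>"
        + "</configuration>";

    // Collect the values of all static String fields (the property names).
    static Set<String> fieldValues(Class<?> c) {
        Set<String> out = new HashSet<>();
        for (Field f : c.getDeclaredFields()) {
            if (Modifier.isStatic(f.getModifiers()) && f.getType() == String.class) {
                try {
                    out.add((String) f.get(null));
                } catch (IllegalAccessException e) {
                    throw new RuntimeException(e);
                }
            }
        }
        return out;
    }

    // Pull every <name>...</name> out of the XML (simplified parsing).
    static Set<String> xmlPropertyNames(String xml) {
        Set<String> out = new HashSet<>();
        Matcher m = Pattern.compile("<name>([^<]+)</name>").matcher(xml);
        while (m.find()) {
            out.add(m.group(1));
        }
        return out;
    }

    public static void main(String[] args) {
        Set<String> inClass = fieldValues(MyJobConfig.class);
        Set<String> inXml = xmlPropertyNames(XML);
        Set<String> missingFromXml = new HashSet<>(inClass);
        missingFromXml.removeAll(inXml);      // properties only in the class
        Set<String> missingFromClass = new HashSet<>(inXml);
        missingFromClass.removeAll(inClass);  // properties only in the XML
        System.out.println("missing from XML: " + missingFromXml);
        System.out.println("missing from class: " + missingFromClass);
    }
}
```

A real version of this test would fail the build when either difference set is non-empty, which is the error-throwing behavior the description asks for.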



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6192) Create unit test to automatically compare MR related classes and mapred-default.xml

2015-05-05 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14529353#comment-14529353
 ] 

Robert Kanter commented on MAPREDUCE-6192:
--

+1




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6192) Create unit test to automatically compare MR related classes and mapred-default.xml

2015-05-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14529376#comment-14529376
 ] 

Hudson commented on MAPREDUCE-6192:
---

FAILURE: Integrated in Hadoop-trunk-Commit #7741 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7741/])
MAPREDUCE-6192. Create unit test to automatically compare MR related classes 
and mapred-default.xml (rchiang via rkanter) (rkanter: rev 
9809a16d3c8068beccbf0106e99c7ede6ba11e0f)
* hadoop-mapreduce-project/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationFieldsBase.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestMapreduceConfigFields.java





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-2094) org.apache.hadoop.mapreduce.lib.input.FileInputFormat: isSplitable implements unsafe default behaviour that is different from the documented behaviour.

2015-05-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-2094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14529382#comment-14529382
 ] 

Hadoop QA commented on MAPREDUCE-2094:
--

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 57s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 3 new or modified test files. |
| {color:green}+1{color} | javac |   7m 40s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 47s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 53s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 16s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | mapreduce tests |   1m 38s | Tests passed in 
hadoop-mapreduce-client-core. |
| | |  38m 46s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12730617/MAPREDUCE-2094-2015-05-05-2328.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0100b15 |
| whitespace | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5648/artifact/patchprocess/whitespace.txt
 |
| hadoop-mapreduce-client-core test log | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5648/artifact/patchprocess/testrun_hadoop-mapreduce-client-core.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5648/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5648/console |


This message was automatically generated.


[jira] [Commented] (MAPREDUCE-6353) Divide by zero error in MR AM when calculating available containers

2015-05-05 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14529383#comment-14529383
 ] 

Arun Suresh commented on MAPREDUCE-6353:


The patch itself looks good. Thanks [~adhoot].
But I was wondering whether we should allow tasks with 0 vcores (maybe within 
the context of a reservation definition, where one or more resource requests 
can have 0 vcores). Would it also make sense to validate the values of 
{{mapreduce.map.cpu.vcores}} / {{mapreduce.reduce.cpu.vcores}} at job 
submission time?

> Divide by zero error in MR AM when calculating available containers
> ---
>
> Key: MAPREDUCE-6353
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6353
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mr-am
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
> Attachments: MAPREDUCE-6353.001.patch
>
>
> When running a sleep job with zero CPU vcores I see the following exception:
> 2015-04-30 06:41:06,954 ERROR [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: ERROR IN 
> CONTACTING RM. 
> java.lang.ArithmeticException: / by zero
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.ResourceCalculatorUtils.computeAvailableContainers(ResourceCalculatorUtils.java:38)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduledRequests.assign(RMContainerAllocator.java:947)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduledRequests.access$200(RMContainerAllocator.java:840)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.heartbeat(RMContainerAllocator.java:247)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator$1.run(RMCommunicator.java:282)
> at java.lang.Thread.run(Thread.java:745)
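
The crash comes from dividing by a per-task vcore request of zero. One zero-safe shape for the calculation, sketched below, treats a zero request as "unconstrained on that dimension"; this is an illustrative stand-in, not the actual ResourceCalculatorUtils code:

```java
public class ContainerMath {
    // Zero-safe sketch of the available-container calculation: a request
    // of zero on a dimension means "unconstrained on that dimension"
    // rather than a division by zero.
    static int computeAvailableContainers(long availMemMb, int availVcores,
                                          long reqMemMb, int reqVcores) {
        int byMem = reqMemMb <= 0 ? Integer.MAX_VALUE
                                  : (int) (availMemMb / reqMemMb);
        int byCpu = reqVcores <= 0 ? Integer.MAX_VALUE
                                   : availVcores / reqVcores;
        return Math.min(byMem, byCpu);
    }

    public static void main(String[] args) {
        // 8 GiB / 8 vcores available, 1 GiB / 0 vcores requested:
        System.out.println(computeAvailableContainers(8192, 8, 1024, 0)); // 8
    }
}
```

Whether zero-vcore requests should instead be rejected at job submission is the separate policy question discussed in the comments.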



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6353) Divide by zero error in MR AM when calculating available containers

2015-05-05 Thread Anubhav Dhoot (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14529450#comment-14529450
 ] 

Anubhav Dhoot commented on MAPREDUCE-6353:
--

It's a good discussion whether we should allow zero vcores. I think it's OK 
if jobs do not care about getting any CPU allocated to them and just use 
whatever leftover CPU exists.
Irrespective of that, we should not crash in the AM when calculating 
available containers. If we need to enforce a minimum, I agree job submission 
is a better place to reject those values.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6304) Specifying node labels when submitting MR jobs

2015-05-05 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14529508#comment-14529508
 ] 

Naganarasimha G R commented on MAPREDUCE-6304:
--

Thanks [~Wangda] for your comments.
+1 for {{mention in description that, by default the node-label-expression for 
job is not set, it will use queue's default-node-label-expression.}}. I am 
testing it in a cluster setup and will upload the updated patch today.

> Specifying node labels when submitting MR jobs
> --
>
> Key: MAPREDUCE-6304
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6304
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>Reporter: Jian Fang
>Assignee: Naganarasimha G R
> Fix For: 2.8.0
>
> Attachments: MAPREDUCE-6304.20150410-1.patch, 
> MAPREDUCE-6304.20150411-1.patch, MAPREDUCE-6304.20150501-1.patch
>
>
> Per the discussion on YARN-796, we need a mechanism in MAPREDUCE to specify 
> node labels when submitting MR jobs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6251) JobClient needs additional retries at a higher level to address not-immediately-consistent dfs corner cases

2015-05-05 Thread Craig Welch (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Craig Welch updated MAPREDUCE-6251:
---
Status: Patch Available  (was: Open)

> JobClient needs additional retries at a higher level to address 
> not-immediately-consistent dfs corner cases
> ---
>
> Key: MAPREDUCE-6251
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6251
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver, mrv2
>Affects Versions: 2.6.0
>Reporter: Craig Welch
>Assignee: Craig Welch
> Attachments: MAPREDUCE-6251.0.patch, MAPREDUCE-6251.1.patch, 
> MAPREDUCE-6251.2.patch, MAPREDUCE-6251.3.patch, MAPREDUCE-6251.4.patch
>
>
> The JobClient is used to get job status information for running and completed 
> jobs.  Final state and history for a job is communicated from the application 
> master to the job history server via a distributed file system - where the 
> history is uploaded by the application master to the dfs and then 
> scanned/loaded by the jobhistory server.  While HDFS has strong consistency 
> guarantees not all Hadoop DFS's do.  When used in conjunction with a 
> distributed file system which does not have this guarantee there will be 
> cases where the history server may not see an uploaded file, resulting in the 
> dreaded "no such job" and a null value for the RunningJob in the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6251) JobClient needs additional retries at a higher level to address not-immediately-consistent dfs corner cases

2015-05-05 Thread Craig Welch (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Craig Welch updated MAPREDUCE-6251:
---
Attachment: MAPREDUCE-6251.4.patch

Updated with recommended move to MRJobConfig




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-5819) Binary token merge should be done once in TokenCache#obtainTokensForNamenodesInternal()

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5819:

Labels: BB2015-05-TBR  (was: )

> Binary token merge should be done once in 
> TokenCache#obtainTokensForNamenodesInternal()
> ---
>
> Key: MAPREDUCE-5819
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5819
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: security
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: mapreduce-5819-v1.txt
>
>
> Currently mergeBinaryTokens() is called by every invocation of 
> obtainTokensForNamenodesInternal(FileSystem, Credentials, Configuration) in 
> the loop of obtainTokensForNamenodesInternal(Credentials, Path[], 
> Configuration).
> This can be simplified so that mergeBinaryTokens() is called only once in 
> obtainTokensForNamenodesInternal(Credentials, Path[], Configuration).
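
The proposed change is a classic loop-invariant hoist: move the merge call out of the per-path loop so it runs once per batch. A minimal sketch with stand-in names (mergeBinaryTokens here is just a counter, not the real TokenCache code):

```java
import java.util.Arrays;
import java.util.List;

public class HoistInvariant {
    static int mergeCalls = 0;

    // Stand-in for mergeBinaryTokens(): loop-invariant work.
    static void mergeBinaryTokens() {
        mergeCalls++;
    }

    // Stand-in for the per-path token fetch.
    static void obtainTokenFor(String path) {
        // per-path work would go here
    }

    // After the proposed change: merge once, then loop over paths.
    // (Before the change, mergeBinaryTokens() sat inside the loop
    // and ran once per path.)
    static void obtainTokensForPaths(List<String> paths) {
        mergeBinaryTokens(); // hoisted loop-invariant call
        for (String p : paths) {
            obtainTokenFor(p);
        }
    }

    public static void main(String[] args) {
        obtainTokensForPaths(Arrays.asList("/a", "/b", "/c"));
        System.out.println(mergeCalls); // 1 merge for 3 paths
    }
}
```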



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-2340) optimize JobInProgress.initTasks()

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-2340:

Labels: BB2015-05-TBR critical-0.22.0  (was: critical-0.22.0)

> optimize JobInProgress.initTasks()
> --
>
> Key: MAPREDUCE-2340
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2340
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: jobtracker
>Affects Versions: 0.20.1, 0.21.0
>Reporter: Kang Xiao
>  Labels: BB2015-05-TBR, critical-0.22.0
> Attachments: MAPREDUCE-2340.patch, MAPREDUCE-2340.patch, 
> MAPREDUCE-2340.r1.diff
>
>
> JobTracker's hostnameToNodeMap cache can speed up JobInProgress.initTasks() 
> and JobInProgress.createCache() significantly. A test for 1 job with 10 
> maps on a 2400-node cluster shows nearly 10x and 50x speedups for 
> initTasks() and createCache().
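
The speedup comes from memoizing the hostname-to-node resolution so each distinct host is resolved once, no matter how many splits reference it. A small stand-in sketch of that caching pattern (resolveNode and the counts are illustrative, not the JobTracker code):

```java
import java.util.HashMap;
import java.util.Map;

public class NodeCache {
    static int resolveCalls = 0;

    // Stand-in for the expensive per-hostname topology resolution.
    static String resolveNode(String hostname) {
        resolveCalls++;
        return "/default-rack/" + hostname;
    }

    // The cache described in the issue: hostname -> node, resolved once.
    static final Map<String, String> hostnameToNodeMap = new HashMap<>();

    static String cachedResolve(String hostname) {
        return hostnameToNodeMap.computeIfAbsent(hostname, NodeCache::resolveNode);
    }

    public static void main(String[] args) {
        // Many splits, few distinct hosts: each host resolves only once.
        for (int i = 0; i < 100_000; i++) {
            cachedResolve("host" + (i % 2400));
        }
        System.out.println(resolveCalls); // 2400, not 100000
    }
}
```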



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-5258) Memory Leak while using LocalJobRunner

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5258:

Labels: BB2015-05-TBR patch  (was: patch)

> Memory Leak while using LocalJobRunner
> --
>
> Key: MAPREDUCE-5258
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5258
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Subroto Sanyal
>Assignee: skrho
>  Labels: BB2015-05-TBR, patch
> Fix For: 1.1.3
>
> Attachments: mapreduce-5258 _001.txt, mapreduce-5258.txt
>
>
> Every-time a LocalJobRunner is launched it creates JobTrackerInstrumentation 
> and QueueMetrics.
> While creating this MetricsSystem ; it registers and adds a Callback to 
> ArrayList which keeps on growing as the DefaultMetricsSystem is Singleton. 
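
The leak pattern described: each LocalJobRunner launch registers a callback with a singleton metrics system, and nothing ever removes it, so the singleton's list grows per run. A minimal stand-in sketch of the pattern and an obvious counter-measure (deregistration); this is not the actual DefaultMetricsSystem API:

```java
import java.util.ArrayList;
import java.util.List;

public class MetricsRegistry {
    // Singleton, as DefaultMetricsSystem is; this class is an
    // illustrative stand-in, not the real metrics API.
    private static final MetricsRegistry INSTANCE = new MetricsRegistry();
    private final List<Runnable> callbacks = new ArrayList<>();

    static MetricsRegistry get() {
        return INSTANCE;
    }

    // The leak: every LocalJobRunner launch calls register(), and the
    // singleton's callback list grows forever.
    void register(Runnable cb) {
        callbacks.add(cb);
    }

    // One counter-measure: short-lived clients remove what they added.
    void unregister(Runnable cb) {
        callbacks.remove(cb);
    }

    int size() {
        return callbacks.size();
    }

    public static void main(String[] args) {
        for (int run = 0; run < 1000; run++) {
            Runnable cb = () -> { };
            get().register(cb);
            get().unregister(cb); // without this line, size() reaches 1000
        }
        System.out.println(get().size()); // 0
    }
}
```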



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-5605) Memory-centric MapReduce aiming to solve the I/O bottleneck

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5605:

Labels: BB2015-05-TBR  (was: )

> Memory-centric MapReduce aiming to solve the I/O bottleneck
> ---
>
> Key: MAPREDUCE-5605
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5605
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Affects Versions: 1.0.1
> Environment: x86-64 Linux/Unix
> 64-bit jdk7 preferred
>Reporter: Ming Chen
>Assignee: Ming Chen
>  Labels: BB2015-05-TBR
> Fix For: 1.0.1
>
> Attachments: MAPREDUCE-5605-v1.patch, TR-mammoth-HUST.pdf, 
> hadoop-core-1.0.1-mammoth-0.9.0.jar
>
>
> Memory is a very important resource for bridging the gap between CPUs and 
> I/O devices, so the idea is to maximize the use of memory to relieve the 
> I/O bottleneck. We developed a multi-threaded task execution engine, which 
> runs in a single JVM on a node. In the execution engine, we implemented a 
> memory-scheduling algorithm to realize global memory management, on top of 
> which we developed techniques such as sequential disk access and 
> multi-cache, and solved the problem of full garbage collection in the JVM. 
> The benchmark results show impressive improvement in typical cases. When a 
> system is relatively short of memory (e.g., HPC, small- and medium-sized 
> enterprises), the improvement is even more impressive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-5626) TaskLogServlet could not get syslog

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5626:

Labels: BB2015-05-TBR patch  (was: patch)

> TaskLogServlet could not get syslog
> ---
>
> Key: MAPREDUCE-5626
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5626
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 1.2.1
> Environment: Linux version 2.6.18-238.9.1.el5
> Java(TM) SE Runtime Environment (build 1.6.0_43-b01)
> hadoop-1.2.1
>Reporter: yangjun
>Priority: Minor
>  Labels: BB2015-05-TBR, patch
> Fix For: 1.2.1
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> When multiply tasks use one jvm and generated logs.
> eg.
> ./attempt_201211220735_0001_m_00_0:
> log.index
> ./attempt_201211220735_0001_m_01_0:
> log.index
> ./attempt_201211220735_0001_m_02_0:
> log.index  stderr  stdout  syslog
> get from http://:50060/tasklog?attemptid= 
> attempt_201211220735_0001_m_00_0 
> could get stderr,stdout,but not the others,include syslog.
> see TaskLogServlet.haveTaskLog() method, not check from local && log.index, 
> but check the original path.
> Proposed fix: modify the TaskLogServlet.haveTaskLog() method to fall back 
> to the index file when the log file is not found at the original path:
> {code}
> private boolean haveTaskLog(TaskAttemptID taskId, boolean isCleanup,
>     TaskLog.LogName type) throws IOException {
>   File f = TaskLog.getTaskLogFile(taskId, isCleanup, type);
>   if (f.exists() && f.canRead()) {
>     return true;
>   } else {
>     File indexFile = TaskLog.getIndexFile(taskId, isCleanup);
>     if (!indexFile.exists()) {
>       return false;
>     }
>
>     BufferedReader fis;
>     try {
>       fis = new BufferedReader(new InputStreamReader(
>           SecureIOUtils.openForRead(indexFile,
>               TaskLog.obtainLogDirOwner(taskId))));
>     } catch (FileNotFoundException ex) {
>       LOG.warn("Index file for the log of " + taskId + " does not exist.");
>
>       // Assume no task reuse is used and files exist on the attempt dir
>       StringBuffer input = new StringBuffer();
>       input.append(LogFileDetail.LOCATION
>           + TaskLog.getAttemptDir(taskId, isCleanup) + "\n");
>       for (LogName logName : TaskLog.LOGS_TRACKED_BY_INDEX_FILES) {
>         input.append(logName + ":0 -1\n");
>       }
>       fis = new BufferedReader(new StringReader(input.toString()));
>     }
>
>     try {
>       String str = fis.readLine();
>       if (str == null) { // the file doesn't have anything
>         throw new IOException("Index file for the log of " + taskId
>             + " is empty.");
>       }
>       String loc = str.substring(str.indexOf(LogFileDetail.LOCATION)
>           + LogFileDetail.LOCATION.length());
>       File tf = new File(loc, type.toString());
>       return tf.exists() && tf.canRead();
>     } finally {
>       if (fis != null)
>         fis.close();
>     }
>   }
> }
> {code}
> Workaround: adding filter=SYSLOG to the URL prints the syslog as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-4978) Add a updateJobWithSplit() method for new-api job

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-4978:

Labels: BB2015-05-TBR  (was: )

> Add a updateJobWithSplit() method for new-api job
> -
>
> Key: MAPREDUCE-4978
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4978
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Affects Versions: 1.1.2
>Reporter: Liyin Liang
>Assignee: Liyin Liang
>  Labels: BB2015-05-TBR
> Attachments: 4978-1.diff
>
>
> HADOOP-1230 adds a method updateJobWithSplit(), which only works for 
> old-api jobs. It's better to add a corresponding method for new-api jobs.





[jira] [Updated] (MAPREDUCE-5614) job history file name should escape queue name

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5614:

Labels: BB2015-05-TBR  (was: )

> job history file name should escape queue name
> --
>
> Key: MAPREDUCE-5614
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5614
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Liyin Liang
>Assignee: Liyin Liang
>  Labels: BB2015-05-TBR
> Attachments: mr-5614-2.diff, mr-5614.diff
>
>
> Our cluster's queue names contain hyphens, e.g. cug-taobao. Because the 
> hyphen is the delimiter in job history file names, JobHistoryServer shows 
> "cug" as the queue name. To fix this problem, we should escape the queue 
> name in the job history file name.
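To make the idea concrete, here is a minimal sketch of escaping the delimiter inside a queue name before it is embedded in the history file name, so that splitting on "-" later recovers the full queue name. The class name and the %-encoding scheme are assumptions for illustration, not the actual MAPREDUCE-5614 patch:

```java
// Hypothetical sketch: percent-encode the history-file delimiter ("-")
// and the escape character itself inside a queue name, so "cug-taobao"
// survives a later split on "-". Not the actual attached patch.
public class QueueNameEscaper {
    static final char DELIM = '-';

    public static String escape(String queue) {
        StringBuilder sb = new StringBuilder();
        for (char c : queue.toCharArray()) {
            if (c == DELIM || c == '%') {
                sb.append(String.format("%%%02X", (int) c)); // '-' -> "%2D"
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static String unescape(String escaped) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < escaped.length(); i++) {
            char c = escaped.charAt(i);
            if (c == '%' && i + 2 < escaped.length()) {
                sb.append((char) Integer.parseInt(
                    escaped.substring(i + 1, i + 3), 16));
                i += 2;
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String esc = escape("cug-taobao");
        System.out.println(esc);            // cug%2Dtaobao
        System.out.println(unescape(esc));  // cug-taobao
    }
}
```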





[jira] [Updated] (MAPREDUCE-5690) TestLocalMRNotification.testMR occasionally fails

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5690:

Labels: BB2015-05-TBR  (was: )

> TestLocalMRNotification.testMR occasionally fails
> -
>
> Key: MAPREDUCE-5690
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5690
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Liyin Liang
>Assignee: Liyin Liang
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-5690.1.diff
>
>
> TestLocalMRNotification is occasionally failing with the error:
> {code}
> ---
> Test set: org.apache.hadoop.mapred.TestLocalMRNotification
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 24.992 sec 
> <<< FAILURE! - in org.apache.hadoop.mapred.TestLocalMRNotification
> testMR(org.apache.hadoop.mapred.TestLocalMRNotification)  Time elapsed: 
> 24.881 sec  <<< ERROR!
> java.io.IOException: Job cleanup didn't start in 20 seconds
> at 
> org.apache.hadoop.mapred.UtilsForTests.runJobKill(UtilsForTests.java:685)
> at 
> org.apache.hadoop.mapred.NotificationTestCase.testMR(NotificationTestCase.java:178)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:168)
> at junit.framework.TestCase.runBare(TestCase.java:134)
> at junit.framework.TestResult$1.protect(TestResult.java:110)
> at junit.framework.TestResult.runProtected(TestResult.java:128)
> at junit.framework.TestResult.run(TestResult.java:113)
> at junit.framework.TestCase.run(TestCase.java:124)
> at junit.framework.TestSuite.runTest(TestSuite.java:243)
> at junit.framework.TestSuite.run(TestSuite.java:238)
> at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:254)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:149)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {code}





[jira] [Updated] (MAPREDUCE-5643) DynamicMR: A Dynamic Slot Utilization Optimization Framework for Hadoop MRv1

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5643:

Labels: BB2015-05-TBR performance  (was: performance)

> DynamicMR: A Dynamic Slot Utilization Optimization Framework for Hadoop MRv1
> 
>
> Key: MAPREDUCE-5643
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5643
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: contrib/fair-share
>Affects Versions: 1.2.1
>Reporter: tang shanjiang
>Assignee: tang shanjiang
>  Labels: BB2015-05-TBR, performance
> Attachments: DynamicMR A Dynamic Slot Allocation Optimization 
> Framework for MapReduce Clusters.pdf, DynamicMR-0.1.1-patch, 
> DynamicMR_TCC_SupplementalMaterial.pdf, README
>
>
> Hadoop MRv1 uses a slot-based resource model with a static configuration of 
> map/reduce slots. There is a strict usage constraint: map tasks can only 
> run on map slots and reduce tasks can only use reduce slots. Due to the 
> rigid execution order between map and reduce tasks in a MapReduce 
> environment, slots can be severely under-utilized, which significantly 
> degrades performance. 
> In contrast to YARN, which gives up the slot-based resource model and 
> proposes a container-based model to maximize resource utilization by being 
> unaware of the types of map/reduce tasks, we keep the slot-based model and 
> propose a dynamic slot utilization optimization system called DynamicMR to 
> improve the performance of Hadoop by maximizing slot utilization as well as 
> slot utilization efficiency while guaranteeing fairness across pools. It 
> consists of three types of scheduling components, namely, Dynamic Hadoop 
> Fair Scheduler (DHFS), Dynamic Speculative Task Scheduler (DSTS), and Data 
> Locality Maximization Scheduler (DLMS).
> Our tests show that DynamicMR outperforms YARN for MapReduce workloads with 
> multiple jobs, especially when the number of jobs is large. The explanation 
> is that, given a certain amount of resources, performance is better with a 
> ratio control of concurrently running map and reduce tasks than without 
> one. Without control, it easily happens that too many reduce tasks run at 
> once, causing the network to become a serious bottleneck. In YARN, both map 
> and reduce tasks can run on any idle container; there is no mechanism 
> controlling the ratio of resource allocation between map and reduce tasks. 
> This means that when there are pending reduce tasks, idle containers will 
> most likely be taken by them. In contrast, DynamicMR follows the 
> traditional slot-based model. Unlike the 'hard' constraint of slot 
> allocation, under which map slots must be allocated to map tasks and reduce 
> tasks must be dispatched to reduce slots, DynamicMR obeys a 'soft' 
> constraint that allows a map slot to be allocated to a reduce task and vice 
> versa. But whenever there are pending map tasks, a map slot is given to map 
> tasks first, and the rule is similar for reduce slots. This means the 
> traditional static map/reduce slot configuration for controlling the ratio 
> of running map/reduce tasks still works under DynamicMR. In comparison to 
> YARN, which maximizes resource utilization only, DynamicMR can maximize 
> slot utilization and at the same time dynamically control the ratio of 
> running map/reduce tasks via the map/reduce slot configuration.
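The "soft constraint" allocation rule described above can be sketched as follows. This is an illustration of the rule, not DynamicMR's actual scheduler code:

```java
// Minimal sketch of the soft-constraint rule: a freed map slot prefers
// pending map tasks but may be borrowed by a reduce task when no map
// task is waiting, and symmetrically for reduce slots.
public class SoftSlotAllocator {
    public enum TaskType { MAP, REDUCE }

    // Decide which kind of task a freed slot should run; null means the
    // slot stays idle because nothing is pending.
    public static TaskType assign(TaskType slotType,
                                  int pendingMaps, int pendingReduces) {
        if (slotType == TaskType.MAP) {
            if (pendingMaps > 0) return TaskType.MAP;       // own type first
            if (pendingReduces > 0) return TaskType.REDUCE; // soft borrow
        } else {
            if (pendingReduces > 0) return TaskType.REDUCE;
            if (pendingMaps > 0) return TaskType.MAP;
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(assign(TaskType.MAP, 0, 3)); // REDUCE (borrowed)
        System.out.println(assign(TaskType.MAP, 2, 3)); // MAP (own type wins)
    }
}
```

Because the static slot counts still bound how many tasks of each type run when both queues are non-empty, the ratio control the description mentions is preserved.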





[jira] [Updated] (MAPREDUCE-6224) resolve the hosts in DNSToSwitchMapping before inter tracker server start to avoid IPC timeout in Task Tracker heartbeat

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-6224:

Labels: BB2015-05-TBR  (was: )

> resolve the hosts in DNSToSwitchMapping before inter tracker server start to 
> avoid IPC timeout in Task Tracker heartbeat
> 
>
> Key: MAPREDUCE-6224
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6224
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv1
>Reporter: zhihai xu
>Assignee: zhihai xu
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-6224.branch-1.000.patch
>
>
> Resolve the hosts to fill up the cache in CachedDNSToSwitchMapping before 
> the inter-tracker server starts, to avoid IPC timeouts in Task Tracker 
> heartbeats.
> We saw IPC timeouts in Task Tracker heartbeats on a large MR1 cluster that 
> uses a topology script (ShellCommandExecutor) to resolve the network 
> topology for Task Tracker hosts in ScriptBasedMapping.
> The reason is:
> Right after the inter-tracker server starts, the Job Tracker receives a lot 
> of heartbeats from the Task Trackers. The heartbeat function calls 
> resolveAndAddToTopology to resolve the network topology for the Task 
> Tracker host in ScriptBasedMapping, which implements 
> CachedDNSToSwitchMapping.
> ScriptBasedMapping#resolve checks whether the host is in the cache. If the 
> host is not in the cache, it runs the topology script via 
> ShellCommandExecutor to get the host's network topology. Running the 
> topology script is normally time consuming, which may cause IPC timeouts 
> if too many heartbeats arrive at the same time on a large MR1 cluster.
> The solution is to resolve the network topology for all hosts in the hosts 
> list from HostsFileReader before receiving any heartbeat from the Task 
> Trackers, so the cache in ScriptBasedMapping is filled up, and when a 
> heartbeat calls resolveAndAddToTopology, it gets the result from the cache 
> instead of running the topology script.
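The pre-warming step can be sketched as below. The class name, the cache shape, and slowTopologyLookup are hypothetical stand-ins for the ScriptBasedMapping machinery, used only to illustrate the startup-time fill:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: resolve every known host once, before the IPC server starts
// taking heartbeats, so heartbeat-time lookups are pure cache hits.
public class RackResolver {
    private final Map<String, String> cache = new HashMap<>();

    // Stand-in for the expensive topology-script invocation.
    private String slowTopologyLookup(String host) {
        return "/default-rack";
    }

    // Called once at startup with the hosts-file contents.
    public void prewarm(List<String> hosts) {
        for (String h : hosts) {
            cache.computeIfAbsent(h, this::slowTopologyLookup);
        }
    }

    // Heartbeat path: a cache hit after prewarm, no script execution.
    public String resolve(String host) {
        return cache.computeIfAbsent(host, this::slowTopologyLookup);
    }

    public static void main(String[] args) {
        RackResolver r = new RackResolver();
        r.prewarm(Arrays.asList("tt01", "tt02"));
        System.out.println(r.resolve("tt01")); // /default-rack (from cache)
    }
}
```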





[jira] [Updated] (MAPREDUCE-4901) JobHistoryEventHandler errors should be fatal

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-4901:

Labels: BB2015-05-TBR  (was: )

> JobHistoryEventHandler errors should be fatal
> -
>
> Key: MAPREDUCE-4901
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4901
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Affects Versions: 0.23.0, 2.0.0-alpha
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
>  Labels: BB2015-05-TBR
> Attachments: MR-4901-trunk.txt
>
>
> To be able to truly fix issues like MAPREDUCE-4819 and MAPREDUCE-4832, we 
> need a 2 phase commit where a subsequent AM can be sure that at a specific 
> point in time it knows exactly if any tasks/jobs are committing.  The job 
> history log is already used for similar functionality so we would like to 
> reuse this, but we need to be sure that errors while writing out to the job 
> history log are now fatal.





[jira] [Updated] (MAPREDUCE-5663) Add an interface to Input/Ouput Formats to obtain delegation tokens

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5663:

Labels: BB2015-05-TBR  (was: )

> Add an interface to Input/Ouput Formats to obtain delegation tokens
> ---
>
> Key: MAPREDUCE-5663
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5663
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Michael Weng
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-5663.4.txt, MAPREDUCE-5663.5.txt, 
> MAPREDUCE-5663.6.txt, MAPREDUCE-5663.patch.txt, MAPREDUCE-5663.patch.txt2, 
> MAPREDUCE-5663.patch.txt3
>
>
> Currently, delegation tokens are obtained as part of the getSplits / 
> checkOutputSpecs calls to the InputFormat / OutputFormat respectively.
> This works as long as the splits are generated on a node with kerberos 
> credentials. For split generation elsewhere (AM for example), an explicit 
> interface is required.
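What such an explicit hook could look like is sketched below. The interface name, method signature, and toy credential store are hypothetical, not the API committed for this issue:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of an explicit token-acquisition hook: a format that needs
// delegation tokens implements it so the client can collect tokens
// while it still holds Kerberos credentials, before split generation
// is deferred to a node (e.g. the AM) that has none.
public class TokenHookSketch {
    /** Hypothetical interface, not the committed MAPREDUCE-5663 API. */
    interface DelegationTokenProvider {
        void obtainDelegationTokens(Map<String, String> credentials);
    }

    /** Toy format standing in for an InputFormat reading from HDFS. */
    static class FakeInputFormat implements DelegationTokenProvider {
        public void obtainDelegationTokens(Map<String, String> credentials) {
            credentials.put("hdfs://nn:8020", "token-bytes");
        }
    }

    public static void main(String[] args) {
        Map<String, String> creds = new HashMap<>();
        Object fmt = new FakeInputFormat();
        // Client side: collect tokens from any format that opts in.
        if (fmt instanceof DelegationTokenProvider) {
            ((DelegationTokenProvider) fmt).obtainDelegationTokens(creds);
        }
        System.out.println(creds.containsKey("hdfs://nn:8020")); // true
    }
}
```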





[jira] [Updated] (MAPREDUCE-3047) FileOutputCommitter throws wrong type of exception when calling abortTask() to handle a directory without permission

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-3047:

Labels: BB2015-05-TBR  (was: )

> FileOutputCommitter throws wrong type of exception when calling abortTask() 
> to handle a directory without permission
> 
>
> Key: MAPREDUCE-3047
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3047
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: JiangKai
>Priority: Trivial
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-3047-1.patch, MAPREDUCE-3047-2.patch, 
> MAPREDUCE-3047.patch
>
>
> When FileOutputCommitter calls abortTask() to create a temp directory, the 
> call can fail if the user has no permission to access the directory or if a 
> file with the same name already exists. In that case the system only writes 
> the error to the log file instead of throwing an exception. As a result, 
> when the temp directory is needed later, the system throws an exception 
> telling the user that the temp directory doesn't exist. In my opinion, that 
> exception is inaccurate and its error information will confuse users.
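The fail-fast behavior the report asks for can be sketched like this. It is an illustration of the idea, not the actual FileOutputCommitter patch; the method name is made up:

```java
import java.io.File;
import java.io.IOException;

// Sketch: surface the real cause immediately with a descriptive
// IOException when the temp directory cannot be created, instead of
// logging and letting a confusing "directory doesn't exist" error
// appear later.
public class AbortTaskSketch {
    static void ensureTempDir(File tmpDir) throws IOException {
        if (tmpDir.isDirectory()) {
            return; // already there
        }
        if (!tmpDir.mkdirs()) {
            // Either no permission, or a plain file occupies the path.
            throw new IOException("Could not create temp dir " + tmpDir
                + (tmpDir.exists() ? " (a non-directory exists there)"
                                   : " (check permissions)"));
        }
    }

    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"),
                            "mr3047-demo");
        ensureTempDir(dir);
        System.out.println(dir.isDirectory()); // true
        dir.delete();
    }
}
```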





[jira] [Updated] (MAPREDUCE-6232) Task state is running when all task attempts fail

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-6232:

Labels: BB2015-05-TBR  (was: )

> Task state is running when all task attempts fail
> -
>
> Key: MAPREDUCE-6232
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6232
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: task
>Affects Versions: 2.6.0
>Reporter: Yang Hao
>Assignee: Yang Hao
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-6232.patch, MAPREDUCE-6232.v2.patch, 
> TaskImpl.new.png, TaskImpl.normal.png, result.pdf
>
>
> When all task attempts fail, the task's state is still running. A better 
> way is to check the task attempts' states: if none of the attempts is 
> running, then the task state should not be running.
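The proposed check can be sketched as below. This is an illustration of the rule only, not TaskImpl's real state machine:

```java
import java.util.Arrays;
import java.util.List;

// Sketch: a task should only report RUNNING if at least one of its
// attempts is actually running.
public class TaskStateSketch {
    enum AttemptState { RUNNING, FAILED, KILLED, SUCCEEDED }

    static boolean taskIsRunning(List<AttemptState> attempts) {
        for (AttemptState s : attempts) {
            if (s == AttemptState.RUNNING) {
                return true;
            }
        }
        return false; // every attempt failed or finished
    }

    public static void main(String[] args) {
        List<AttemptState> allFailed =
            Arrays.asList(AttemptState.FAILED, AttemptState.FAILED);
        System.out.println(taskIsRunning(allFailed)); // false
    }
}
```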





[jira] [Updated] (MAPREDUCE-6258) add support to back up JHS files from application master

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-6258:

Labels: BB2015-05-TBR  (was: )

> add support to back up JHS files from application master
> 
>
> Key: MAPREDUCE-6258
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6258
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: applicationmaster
>Affects Versions: 2.4.1
>Reporter: Jian Fang
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-6258.patch
>
>
> In Hadoop 2, job history files are stored on HDFS with a default retention 
> period of one week. In a cloud environment, these HDFS files are actually 
> stored on the disks of ephemeral instances that could go away once the 
> instances are terminated. Users may want to back up the job history files for 
> issue investigation and performance analysis before and after the cluster is 
> terminated. 
> A centralized backup mechanism could have a scalability issue for big and 
> busy Hadoop clusters where there are probably tens of thousands of jobs every 
> day. As a result, it is preferred to have a distributed way to back up the 
> job history files in this case. To achieve this goal, we could add a new 
> feature to back up the job history files in Application master. More 
> specifically, we could copy the job history files to a backup path when they 
> are moved from the temporary staging directory to the intermediate_done path 
> in application master. Since application masters could run on any slave nodes 
> on a Hadoop cluster, we could achieve a better scalability by backing up the 
> job history files in a distributed fashion.
> Please be aware, the backup path should be managed by the Hadoop users based 
> on their needs. For example, some Hadoop users may copy the job history files 
> to a cloud storage directly and keep them there forever. While some other 
> users may want to store the job history files on local disks and clean them 
> up from time to time.
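The copy-on-move step described above can be sketched with local files. This is an illustration of the mechanism only; the real change operates on HDFS paths inside the AM, and the path names here are made up:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch: when a finished job's history file moves from staging to the
// intermediate_done path, also copy it to an optional, user-managed
// backup location.
public class HistoryBackupSketch {
    static void moveWithBackup(Path src, Path done, Path backupDir)
            throws IOException {
        Files.createDirectories(done.getParent());
        if (backupDir != null) { // backup is optional and user-configured
            Files.createDirectories(backupDir);
            Files.copy(src, backupDir.resolve(src.getFileName()),
                       StandardCopyOption.REPLACE_EXISTING);
        }
        Files.move(src, done, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("jhs-demo");
        Path src = Files.write(tmp.resolve("job_1.jhist"),
                               "events".getBytes());
        moveWithBackup(src, tmp.resolve("done/job_1.jhist"),
                       tmp.resolve("backup"));
        System.out.println(
            Files.exists(tmp.resolve("backup/job_1.jhist")));  // true
        System.out.println(
            Files.exists(tmp.resolve("done/job_1.jhist")));    // true
    }
}
```

Because each AM performs its own copy, the backup load spreads across the cluster, which is the scalability argument made above.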





[jira] [Updated] (MAPREDUCE-6208) There should be an input format for MapFiles which can be configured so that only a fraction of the input data is used for the MR process

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-6208:

Labels: BB2015-05-TBR inputformat mapfile  (was: inputformat mapfile)

> There should be an input format for MapFiles which can be configured so that 
> only a fraction of the input data is used for the MR process
> -
>
> Key: MAPREDUCE-6208
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6208
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Jens Rabe
>Assignee: Jens Rabe
>  Labels: BB2015-05-TBR, inputformat, mapfile
> Attachments: MAPREDUCE-6208.001.patch, MAPREDUCE-6208.002.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> In some cases there are large amounts of data organized in MapFiles, e.g., 
> from previous MapReduce tasks, and only a fraction of the data is to be 
> processed in a MR task. The current approach, as I understand, is to 
> re-organize the data in a suitable partition using folders on HDFS, and only 
> use relevant folders as input paths, and maybe doing some additional 
> filtering in the Map task. However, sometimes the input data cannot be 
> easily partitioned that way. For example, when processing large amounts of 
> measured data, additional data for a time period already in HDFS can 
> arrive later.
> There should be an input format that accepts folders with MapFiles, and there 
> should be an option to specify the input key range so that only fitting 
> InputSplits are generated.
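The split-filtering idea can be sketched with a simple interval-overlap test. This illustrates the principle only; it is not the attached patch, and real code would compare WritableComparable keys read from each MapFile's index:

```java
// Sketch: given a configured [queryMin, queryMax] key range and the
// per-MapFile key bounds, generate splits only for files whose key
// range overlaps the query range.
public class KeyRangeFilterSketch {
    // Closed-interval overlap test on long keys.
    static boolean overlaps(long fileMin, long fileMax,
                            long queryMin, long queryMax) {
        return fileMin <= queryMax && queryMin <= fileMax;
    }

    public static void main(String[] args) {
        // MapFile covering keys 100..200, query range 150..300: keep it.
        System.out.println(overlaps(100, 200, 150, 300)); // true
        // MapFile covering 0..99 does not intersect 150..300: skip it.
        System.out.println(overlaps(0, 99, 150, 300));    // false
    }
}
```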





[jira] [Updated] (MAPREDUCE-4961) Map reduce running local should also go through ShuffleConsumerPlugin for enabling different MergeManager implementations

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-4961:

Labels: BB2015-05-TBR  (was: )

> Map reduce running local should also go through ShuffleConsumerPlugin for 
> enabling different MergeManager implementations
> -
>
> Key: MAPREDUCE-4961
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4961
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Jerry Chen
>Assignee: Jerry Chen
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-4961.patch, MAPREDUCE-4961.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> MAPREDUCE-4049 provide the ability for pluggable Shuffle and MAPREDUCE-4080 
> extends Shuffle to be able to provide different MergeManager implementations. 
> While using these pluggable features, I found that when a map reduce job 
> runs locally, a RawKeyValueIterator is returned directly from a static call 
> to Merger.merge, which breaks the assumption that the Shuffle may provide 
> different merge methods, even though there is no copy phase in this 
> situation.
> The use case: when implementing a hash-based MergeManager, we don't need a 
> sort on the map side, but when the map reduce job runs locally, the 
> hash-based MergeManager has no chance to be used because the code goes 
> directly to Merger.merge. This makes the pluggable Shuffle and MergeManager 
> incomplete.
> So we need to move the code calling Merger.merge from ReduceTask into the 
> ShuffleConsumerPlugin implementation, so that the Shuffle implementation 
> can decide how to do the merge and return the corresponding iterator.





[jira] [Updated] (MAPREDUCE-6205) Update the value of the new version properties of the deprecated property "mapred.child.java.opts"

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-6205:

Labels: BB2015-05-TBR  (was: )

> Update the value of the new version properties of the deprecated property 
> "mapred.child.java.opts"
> --
>
> Key: MAPREDUCE-6205
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6205
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Reporter: sam liu
>Assignee: sam liu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-6205.003.patch, MAPREDUCE-6205.patch, 
> MAPREDUCE-6205.patch
>
>
> In the current hadoop code, the old property "mapred.child.java.opts" is 
> deprecated and its new versions are MRJobConfig.MAP_JAVA_OPTS and 
> MRJobConfig.REDUCE_JAVA_OPTS. However, when a user sets a value for the 
> deprecated property "mapred.child.java.opts", hadoop won't automatically 
> update the new version properties 
> MRJobConfig.MAP_JAVA_OPTS ("mapreduce.map.java.opts") and 
> MRJobConfig.REDUCE_JAVA_OPTS ("mapreduce.reduce.java.opts"). Since hadoop 
> updates the new version properties for many other deprecated properties, 
> we should also support this for the old property "mapred.child.java.opts"; 
> otherwise it might cause incompatibility issues.





[jira] [Updated] (MAPREDUCE-5915) Pipes ping thread should sleep in intervals to allow for isDone() to be checked

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5915:

Labels: BB2015-05-TBR  (was: )

> Pipes ping thread should sleep in intervals to allow for isDone() to be 
> checked
> ---
>
> Key: MAPREDUCE-5915
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5915
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: pipes
>Reporter: Joe Mudd
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-5915.patch
>
>
> The ping() thread sleeps for 5 seconds at a time causing up to a 5 second 
> delay in testing if the job is finished.
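The interrupt-friendly loop the report suggests can be sketched as below. It illustrates the sleep-in-slices pattern only; it is not the Pipes code or the attached patch:

```java
// Sketch: instead of one 5-second sleep between pings, sleep in short
// slices and re-check isDone() each time, so shutdown latency drops
// from up to 5 s to roughly the slice length.
public class PingLoopSketch implements Runnable {
    private volatile boolean done = false;

    public void markDone()  { done = true; }
    public boolean isDone() { return done; }

    @Override
    public void run() {
        final long pingIntervalMs = 5000;
        final long sliceMs = 100; // check isDone() every 100 ms
        try {
            while (!isDone()) {
                long slept = 0;
                while (slept < pingIntervalMs && !isDone()) {
                    Thread.sleep(sliceMs);
                    slept += sliceMs;
                }
                if (!isDone()) {
                    sendPing();
                }
            }
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
    }

    private void sendPing() { /* heartbeat to the parent process */ }

    public static void main(String[] args) throws InterruptedException {
        PingLoopSketch ping = new PingLoopSketch();
        Thread t = new Thread(ping);
        t.start();
        ping.markDone();   // the loop notices within ~one slice
        t.join(3000);
        System.out.println(!t.isAlive()); // true
    }
}
```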





[jira] [Updated] (MAPREDUCE-6155) MapFiles are not always correctly detected by SequenceFileInputFormat

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-6155:

Labels: BB2015-05-TBR  (was: )

> MapFiles are not always correctly detected by SequenceFileInputFormat
> -
>
> Key: MAPREDUCE-6155
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6155
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Jens Rabe
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-6155.001.patch, MAPREDUCE-6155.002.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> MapFiles are not correctly detected by SequenceFileInputFormat.
> This is because the listStatus method only detects a MapFile correctly if the 
> path it checks is a directory - it then replaces it by the path of the data 
> file.
> This is likely to fail if the data file does not exist, i.e., if the input 
> path is a directory, but does not belong to a MapFile, or if recursion is 
> turned on and the input format comes across a file (not a directory) which is 
> indeed part of a MapFile.
> The listStatus method should be changed to detect these cases correctly:
> * if the current candidate is a file and its name is "index" or "data", check 
> if its corresponding other file exists, and if the key types of both files 
> match and if the value type of the index file is LongWritable
> * If the current candidate is a directory, it is only a MapFile if (and only 
> if) an index and a data file exist, they are both SequenceFiles and their key 
> types match (and the index value type is LongWritable)
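A simplified version of the directory rule from the list above can be sketched as follows. Real code would additionally open both files as SequenceFiles and compare key/value types, which this illustration omits:

```java
import java.io.File;
import java.nio.file.Files;

// Sketch: treat a directory as a MapFile only when both an "index" and
// a "data" file are present inside it (type checks omitted).
public class MapFileDetectSketch {
    static boolean looksLikeMapFileDir(File dir) {
        return dir.isDirectory()
            && new File(dir, "index").isFile()
            && new File(dir, "data").isFile();
    }

    public static void main(String[] args) throws Exception {
        File dir = Files.createTempDirectory("mapfile").toFile();
        System.out.println(looksLikeMapFileDir(dir)); // false: empty dir
        new File(dir, "index").createNewFile();
        new File(dir, "data").createNewFile();
        System.out.println(looksLikeMapFileDir(dir)); // true
    }
}
```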





[jira] [Updated] (MAPREDUCE-5916) The authenticate response is not sent when password is empty (LocalJobRunner)

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5916:

Labels: BB2015-05-TBR  (was: )

> The authenticate response is not sent when password is empty (LocalJobRunner)
> -
>
> Key: MAPREDUCE-5916
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5916
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: pipes
>Reporter: Joe Mudd
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-5916.patch
>
>
> When running in a mode where there are no credentials associated with the 
> pipes submission and the password is empty, the C++ verifyDigestAndRespond() 
> does not respond to the Java side.





[jira] [Updated] (MAPREDUCE-3385) Add warning message for the overflow in reduce() of org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-3385:

Labels: BB2015-05-TBR  (was: )

> Add warning message for the overflow in reduce() of 
> org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer
> 
>
> Key: MAPREDUCE-3385
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3385
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: JiangKai
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-3385.patch
>
>
> When we call the reduce() function of IntSumReducer, the result may 
> overflow. We should send a warning message to users if overflow occurs.
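One way to detect the overflow is to accumulate into a long and warn when the total leaves the int range before it is written back. This is a sketch of the idea, not the attached patch:

```java
// Sketch: sum into a long, warn on int overflow, then wrap exactly as
// the current int accumulation silently does.
public class IntSumOverflowSketch {
    static int sumWithWarning(int[] values) {
        long sum = 0;
        for (int v : values) {
            sum += v; // a long cannot overflow on int-sized inputs here
        }
        if (sum > Integer.MAX_VALUE || sum < Integer.MIN_VALUE) {
            System.err.println("WARN: IntSumReducer result " + sum
                + " overflows int; value will wrap to " + (int) sum);
        }
        return (int) sum;
    }

    public static void main(String[] args) {
        System.out.println(sumWithWarning(new int[] {1, 2, 3}));
        // prints 6
        System.out.println(sumWithWarning(new int[] {Integer.MAX_VALUE, 1}));
        // prints -2147483648, after the warning on stderr
    }
}
```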





[jira] [Updated] (MAPREDUCE-4070) JobHistoryServer creates /tmp directory with restrictive permissions if the directory doesn't already exist.

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-4070:

Labels: BB2015-05-TBR  (was: )

> JobHistoryServer creates /tmp directory with restrictive permissions if the 
> directory doesn't already exist.
> 
>
> Key: MAPREDUCE-4070
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4070
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Affects Versions: 0.23.1
>Reporter: Ahmed Radwan
>Assignee: Ahmed Radwan
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-4070.patch
>
>
> Starting up the MapReduce JobHistoryServer service after a clean install 
> appears to automatically create the /tmp directory on HDFS. However, it is 
> created with 750 permissions.
> Attempting to run MR jobs by other users results in the following permissions 
> exception:
> {code}
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=cloudera, access=EXECUTE, inode="/tmp":yarn:supergroup:drwxr-x---
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> ..
> {code}
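The fix direction can be sketched with local-filesystem permissions. This is an illustration only; the real change is in the JobHistoryServer's HDFS setup code, and HDFS would use FsPermission rather than NIO (the sticky bit of a true 1777 /tmp is also not expressible via NIO):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Sketch: if a shared root like /tmp must be created, create it
// world-accessible instead of inheriting a restrictive default (750)
// that blocks other users' jobs.
public class TmpDirPermSketch {
    static Path createSharedDir(Path p) throws IOException {
        Set<PosixFilePermission> rwxAll =
            PosixFilePermissions.fromString("rwxrwxrwx");
        Files.createDirectories(p);
        Files.setPosixFilePermissions(p, rwxAll);
        return p;
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempDirectory("shared-tmp-demo");
        createSharedDir(p);
        System.out.println(PosixFilePermissions.toString(
            Files.getPosixFilePermissions(p))); // rwxrwxrwx (POSIX systems)
    }
}
```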





[jira] [Updated] (MAPREDUCE-1125) SerialUtils.cc: deserializeFloat is out of sync with SerialUtils.hh

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-1125:

Labels: BB2015-05-TBR  (was: )

> SerialUtils.cc: deserializeFloat is out of sync with SerialUtils.hh
> ---
>
> Key: MAPREDUCE-1125
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1125
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: pipes
>Affects Versions: 0.21.0
>Reporter: Simone Leo
>Assignee: Simone Leo
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-1125-2.patch, MAPREDUCE-1125-3.patch
>
>
> {noformat}
> *** SerialUtils.hh ***
>   float deserializeFloat(InStream& stream);
> *** SerialUtils.cc ***
>   void deserializeFloat(float& t, InStream& stream)
>   {
> char buf[sizeof(float)];
> stream.read(buf, sizeof(float));
> XDR xdrs;
> xdrmem_create(&xdrs, buf, sizeof(float), XDR_DECODE);
> xdr_float(&xdrs, &t);
>   }
> {noformat}





[jira] [Updated] (MAPREDUCE-2638) Create a simple stress test for the fair scheduler

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-2638:

Labels: BB2015-05-TBR  (was: )

> Create a simple stress test for the fair scheduler
> --
>
> Key: MAPREDUCE-2638
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2638
> Project: Hadoop Map/Reduce
>  Issue Type: Test
>  Components: contrib/fair-share
>Reporter: Tom White
>Assignee: Tom White
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-2638.patch, MAPREDUCE-2638.patch
>
>
> This would be a test that runs against a cluster, typically with settings 
> that allow preemption to be exercised.





[jira] [Updated] (MAPREDUCE-5801) Uber mode's log message is missing a vcore reason

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5801:

Labels: BB2015-05-TBR easyfix  (was: easyfix)

> Uber mode's log message is missing a vcore reason
> -
>
> Key: MAPREDUCE-5801
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5801
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Steven Wong
>Assignee: Steven Wong
>Priority: Minor
>  Labels: BB2015-05-TBR, easyfix
> Attachments: MAPREDUCE-5801.patch
>
>
> If a job cannot be run in uber mode because of insufficient vcores, the 
> resulting log message has an empty reason.





[jira] [Updated] (MAPREDUCE-5018) Support raw binary data with Hadoop streaming

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5018:

Labels: BB2015-05-TBR  (was: )

> Support raw binary data with Hadoop streaming
> -
>
> Key: MAPREDUCE-5018
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5018
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: contrib/streaming
>Affects Versions: 1.1.2
>Reporter: Jay Hacker
>Assignee: Steven Willis
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-5018-branch-1.1.patch, MAPREDUCE-5018.patch, 
> MAPREDUCE-5018.patch, justbytes.jar, mapstream
>
>
> People often have a need to run older programs over many files, and turn to 
> Hadoop streaming as a reliable, performant batch system.  There are good 
> reasons for this:
> 1. Hadoop is convenient: they may already be using it for mapreduce jobs, and 
> it is easy to spin up a cluster in the cloud.
> 2. It is reliable: HDFS replicates data and the scheduler retries failed jobs.
> 3. It is reasonably performant: it moves the code to the data, maintaining 
> locality, and scales with the number of nodes.
> Historically Hadoop is of course oriented toward processing key/value pairs, 
> and so needs to interpret the data passing through it.  Unfortunately, this 
> makes it difficult to use Hadoop streaming with programs that don't deal in 
> key/value pairs, or with binary data in general.  For example, something as 
> simple as running md5sum to verify the integrity of files will not give the 
> correct result, due to Hadoop's interpretation of the data.  
> There have been several attempts at binary serialization schemes for Hadoop 
> streaming, such as TypedBytes (HADOOP-1722); however, these are still aimed 
> at efficiently encoding key/value pairs, and not passing data through 
> unmodified.  Even the "RawBytes" serialization scheme adds length fields to 
> the data, rendering it not-so-raw.
> I often have a need to run a Unix filter on files stored in HDFS; currently, 
> the only way I can do this on the raw data is to copy the data out and run 
> the filter on one machine, which is inconvenient, slow, and unreliable.  It 
> would be very convenient to run the filter as a map-only job, allowing me to 
> build on existing (well-tested!) building blocks in the Unix tradition 
> instead of reimplementing them as mapreduce programs.
> However, most existing tools don't know about file splits, and so want to 
> process whole files; and of course many expect raw binary input and output.  
> The solution is to run a map-only job with an InputFormat and OutputFormat 
> that just pass raw bytes and don't split.  It turns out to be a little more 
> complicated with streaming; I have attached a patch with the simplest 
> solution I could come up with.  I call the format "JustBytes" (as "RawBytes" 
> was already taken), and it should be usable with most recent versions of 
> Hadoop.
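The md5sum failure mentioned above comes from line-oriented reinterpretation of the byte stream. The toy example below (plain Java, unrelated to the attached patch) shows why: splitting on newlines and rejoining does not round-trip arbitrary bytes, so any checksum of the output can differ from the original file.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class BinaryRoundTrip {
    // Naive "text" handling: split the bytes into lines and rejoin them.
    public static byte[] throughLines(byte[] input) {
        String[] lines = new String(input, StandardCharsets.ISO_8859_1).split("\n");
        return String.join("\n", lines).getBytes(StandardCharsets.ISO_8859_1);
    }

    public static void main(String[] args) {
        byte[] data = "payload\n".getBytes(StandardCharsets.ISO_8859_1);
        // The trailing newline is silently dropped, so the output no longer
        // matches the original bytes.
        System.out.println(Arrays.equals(throughLines(data), data));
    }
}
```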





[jira] [Updated] (MAPREDUCE-4071) NPE while executing MRAppMaster shutdown hook

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-4071:

Labels: BB2015-05-TBR  (was: )

> NPE while executing MRAppMaster shutdown hook
> -
>
> Key: MAPREDUCE-4071
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4071
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mr-am, mrv2
>Affects Versions: 0.23.3, 2.0.0-alpha
>Reporter: Bhallamudi Venkata Siva Kamesh
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-4071-1.patch, MAPREDUCE-4071-2.patch, 
> MAPREDUCE-4071-2.patch, MAPREDUCE-4071.patch
>
>
> While running the shutdown hook of MRAppMaster, hit NPE
> {noformat}
> Exception in thread "Thread-1" java.lang.NullPointerException
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter.setSignalled(MRAppMaster.java:668)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$MRAppMasterShutdownHook.run(MRAppMaster.java:1004)
> {noformat}





[jira] [Updated] (MAPREDUCE-3383) Duplicate job.getOutputValueGroupingComparator() in ReduceTask

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-3383:

Labels: BB2015-05-TBR  (was: )

> Duplicate job.getOutputValueGroupingComparator() in ReduceTask
> --
>
> Key: MAPREDUCE-3383
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3383
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 0.23.1
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-3383.patch
>
>
> This is probably just a small mistake.





[jira] [Updated] (MAPREDUCE-4710) Add peak memory usage counter for each task

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-4710:

Labels: BB2015-05-TBR patch  (was: patch)

> Add peak memory usage counter for each task
> ---
>
> Key: MAPREDUCE-4710
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4710
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: task
>Affects Versions: 1.0.2
>Reporter: Cindy Li
>Assignee: Cindy Li
>Priority: Minor
>  Labels: BB2015-05-TBR, patch
> Attachments: MAPREDUCE-4710-trunk.patch, mapreduce-4710-v1.0.2.patch, 
> mapreduce-4710.patch, mapreduce4710-v3.patch, mapreduce4710-v6.patch, 
> mapreduce4710.patch
>
>
> Each task has counters PHYSICAL_MEMORY_BYTES and VIRTUAL_MEMORY_BYTES, which 
> are snapshots of memory usage of that task. They are not sufficient for users 
> to understand peak memory usage by that task, e.g. in order to diagnose task 
> failures, tune job parameters or change application design. This new feature 
> will add two more counters for each task: PHYSICAL_MEMORY_BYTES_MAX and 
> VIRTUAL_MEMORY_BYTES_MAX. 





[jira] [Updated] (MAPREDUCE-3384) Add warning message for org.apache.hadoop.mapreduce.lib.reduce.LongSumReducer

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-3384:

Labels: BB2015-05-TBR  (was: )

> Add warning message for org.apache.hadoop.mapreduce.lib.reduce.LongSumReducer
> -
>
> Key: MAPREDUCE-3384
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3384
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: JiangKai
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-3384.patch
>
>
> When we call the reduce() function of LongSumReducer, the result may overflow.
> We should send a warning message to users if overflow occurs.
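The overflow described here can be detected cheaply with Math.addExact, which throws ArithmeticException on long overflow. A minimal sketch of the idea (illustrative only, not the actual LongSumReducer code):

```java
public class OverflowAwareSum {
    // Sum values, warning if the running total overflows a long.
    public static long sumWithWarning(long[] values) {
        long sum = 0;
        for (long v : values) {
            try {
                sum = Math.addExact(sum, v); // throws ArithmeticException on overflow
            } catch (ArithmeticException e) {
                System.err.println("WARN: long overflow while summing; result wrapped");
                sum += v; // keep the wrapping result, matching current behavior
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        // Long.MAX_VALUE + 1 wraps to Long.MIN_VALUE, triggering the warning.
        System.out.println(sumWithWarning(new long[] {Long.MAX_VALUE, 1}));
    }
}
```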





[jira] [Updated] (MAPREDUCE-4911) Add node-level aggregation flag feature(setNodeLevelAggregation(boolean)) to JobConf

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-4911:

Labels: BB2015-05-TBR  (was: )

> Add node-level aggregation flag feature(setNodeLevelAggregation(boolean)) to 
> JobConf
> 
>
> Key: MAPREDUCE-4911
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4911
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: client
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-4911.2.patch, MAPREDUCE-4911.3.patch, 
> MAPREDUCE-4911.patch
>
>
> This JIRA adds a node-level aggregation flag feature 
> (setLocalAggregation(boolean)) to JobConf.
> This task is subtask of MAPREDUCE-4502.





[jira] [Updated] (MAPREDUCE-5728) Check NPE for serializer/deserializer in MapTask

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5728:

Labels: BB2015-05-TBR  (was: )

> Check NPE for serializer/deserializer in MapTask
> 
>
> Key: MAPREDUCE-5728
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5728
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: client
>Affects Versions: 2.2.0
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-5728-trunk.patch
>
>
> Currently we will get NPE if the serializer/deserializer is not configured 
> correctly.
> {code}
> 14/01/14 11:52:35 INFO mapred.JobClient: Task Id : 
> attempt_201401072154_0027_m_02_2, Status : FAILED
> java.lang.NullPointerException
> at 
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.(MapTask.java:944)
> at 
> org.apache.hadoop.mapred.MapTask$NewOutputCollector.(MapTask.java:672)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:740)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:368)
> at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:362)
> at javax.security.auth.Subject.doAs(Subject.java:573)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1502)
> at org.apache.hadoop.mapred.Child.main(Child.java:249)
> {code}
> serializationFactory.getSerializer and serializationFactory.getDeserializer 
> return NULL in this case.
> Let's add a null check for the serializer/deserializer in MapTask so that 
> users don't get a meaningless NPE.
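The fail-fast check proposed here might look like the following sketch; the method name and message are illustrative assumptions, not the actual MapTask change:

```java
public class SerializerCheck {
    // Throw an actionable error instead of a bare NPE when a serializer
    // could not be found for the given class.
    static <T> T requireConfigured(T serializer, String className) {
        if (serializer == null) {
            throw new IllegalStateException("Cannot find a serializer for " + className
                + "; check the io.serializations configuration property");
        }
        return serializer;
    }

    public static void main(String[] args) {
        System.out.println(requireConfigured("dummySerializer", "org.example.MyWritable"));
    }
}
```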





[jira] [Updated] (MAPREDUCE-5269) Preemption of Reducer (and Shuffle) via checkpointing

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5269:

Labels: BB2015-05-TBR  (was: )

> Preemption of Reducer (and Shuffle) via checkpointing
> -
>
> Key: MAPREDUCE-5269
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5269
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-5269.2.patch, MAPREDUCE-5269.3.patch, 
> MAPREDUCE-5269.4.patch, MAPREDUCE-5269.5.patch, MAPREDUCE-5269.6.patch, 
> MAPREDUCE-5269.7.patch, MAPREDUCE-5269.patch
>
>
> This patch tracks the changes in the task runtime (shuffle, reducer context, 
> etc.) that are required to implement checkpoint-based preemption of reducer 
> tasks.





[jira] [Updated] (MAPREDUCE-5854) Move the search box in UI from the right side to the left side

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5854:

Labels: BB2015-05-TBR  (was: )

> Move the search box in UI from the right side to the left side
> --
>
> Key: MAPREDUCE-5854
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5854
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Affects Versions: 0.23.9
>Reporter: Jinhui Liu
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-5854.patch, MAPREDUCE-5854.patch
>
>
> In the UI for the resource manager, job history, and job configuration (this 
> might not be a complete list), there is a search box at the top-right corner 
> of the listed content. This search box is frequently used, but it is often not 
> visible due to its right alignment; extra scrolling is needed to reach it, 
> which is inconvenient. It would be good to move it to the left side, next to 
> the "Show ... Entries" drop-down box.
> In the same spirit, the "First|Previous|...|Next|Last" controls at the 
> bottom-right corner of the listed content can also be moved to the left side. 





[jira] [Updated] (MAPREDUCE-3202) Integrating Hadoop Vaidya with Job History UI in Hadoop 2.0

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-3202:

Labels: BB2015-05-TBR  (was: )

> Integrating Hadoop Vaidya with Job History UI in Hadoop 2.0 
> 
>
> Key: MAPREDUCE-3202
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3202
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: jobhistoryserver
>Affects Versions: 2.0.0-alpha
>Reporter: vitthal (Suhas) Gogate
>Assignee: vitthal (Suhas) Gogate
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-3202.patch, MAPREDUCE-3202.patch
>
>
> Hadoop Vaidya provides a detailed analysis of the M/R job in terms of various 
> execution inefficiencies and the associated remedies that user can easily 
> understand and fix. This Jira patch integrates it with Job History UI under 
> Hadoop 2.0 branch.





[jira] [Updated] (MAPREDUCE-4594) Add init/shutdown methods to mapreduce Partitioner

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-4594:

Labels: BB2015-05-TBR  (was: )

> Add init/shutdown methods to mapreduce Partitioner
> --
>
> Key: MAPREDUCE-4594
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4594
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: client
>Reporter: Radim Kolar
>Assignee: Radim Kolar
>  Labels: BB2015-05-TBR
> Attachments: partitioner1.txt, partitioner2.txt, partitioner2.txt, 
> partitioner3.txt, partitioner4.txt, partitioner5.txt, partitioner6.txt, 
> partitioner6.txt, partitioner7.txt, partitioner8.txt, partitioner9.txt
>
>
> The Partitioner supports only the Configurable API, which can be used for 
> basic initialization in setConf(). The problem is that there is no shutdown 
> function.
> I propose to use the standard setup()/cleanup() functions, as in Mapper and 
> Reducer.
> My use case is that I need to start and stop a Spring context and a datagrid client.
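A sketch of what the proposed lifecycle could look like (a hypothetical API mirroring Mapper/Reducer, not an existing Hadoop interface):

```java
public abstract class LifecyclePartitioner<K, V> {
    // Called once before any getPartition() call, e.g. to start a
    // Spring context or a datagrid client.
    public void setup() {}

    // Called once after the last getPartition() call, e.g. to shut
    // those resources down.
    public void cleanup() {}

    public abstract int getPartition(K key, V value, int numPartitions);

    public static void main(String[] args) {
        LifecyclePartitioner<String, String> p = new LifecyclePartitioner<String, String>() {
            @Override
            public int getPartition(String key, String value, int numPartitions) {
                return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
            }
        };
        p.setup();
        System.out.println(p.getPartition("key", "value", 4));
        p.cleanup();
    }
}
```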





[jira] [Updated] (MAPREDUCE-5860) Hadoop pipes Combiner is closed before all of its reduce calls

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5860:

Labels: BB2015-05-TBR  (was: )

> Hadoop pipes Combiner is closed before all of its reduce calls
> --
>
> Key: MAPREDUCE-5860
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5860
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: pipes
>Affects Versions: 0.23.0
> Environment: 0.23.0 on 64 bit linux
>Reporter: Joe Mudd
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-5860.patch
>
>
> When a Combiner is specified to runTask() its reduce() method may be called 
> after its close() method has been called due to how the Combiner's containing 
> object, CombineRunner, is closed after the TaskContextImpl's reducer member 
> is closed (see TaskContextImpl::closeAll()).
> I believe the fix is to delegate the Combiner's ownership to CombineRunner, 
> making it responsible for calling the Combiner's close() method and deleting 
> the Combiner instance.





[jira] [Updated] (MAPREDUCE-4443) MR AM and job history server should be resilient to jobs that exceed counter limits

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-4443:

Labels: BB2015-05-TBR usability  (was: usability)

> MR AM and job history server should be resilient to jobs that exceed counter 
> limits 
> 
>
> Key: MAPREDUCE-4443
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4443
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha
>Reporter: Rahul Jain
>Assignee: Mayank Bansal
>  Labels: BB2015-05-TBR, usability
> Attachments: MAPREDUCE-4443-trunk-1.patch, 
> MAPREDUCE-4443-trunk-2.patch, MAPREDUCE-4443-trunk-3.patch, 
> MAPREDUCE-4443-trunk-draft.patch, am_failed_counter_limits.txt
>
>
> We saw this problem migrating applications to MapReduceV2:
> Our applications use hadoop counters extensively (1000+ counters for certain 
> jobs). While this may not be one of the recommended best practices in Hadoop, the 
> real issue here is reliability of the framework when applications exceed 
> counter limits.
> The hadoop servers (yarn, history server) were originally brought up with 
> mapreduce.job.counters.max=1000 under core-site.xml
> We then ran a map-reduce job under an application using its own job-specific 
> overrides, with  mapreduce.job.counters.max=1
> All the tasks for the job finished successfully; however the overall job 
> still failed due to AM encountering exceptions as:
> {code}
> 2012-07-12 17:31:43,485 INFO [AsyncDispatcher event handler] 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks
> : 712012-07-12 17:31:43,502 FATAL [AsyncDispatcher event handler] 
> org.apache.hadoop.yarn.event.AsyncDispatcher: Error in dispatcher threa
> dorg.apache.hadoop.mapreduce.counters.LimitExceededException: Too many 
> counters: 1001 max=1000
> at 
> org.apache.hadoop.mapreduce.counters.Limits.checkCounters(Limits.java:58) 
>at org.apache.hadoop.mapreduce.counters.Limits.incrCounters(Limits.java:65)
> at 
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.addCounter(AbstractCounterGroup.java:77)
> at 
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.addCounterImpl(AbstractCounterGroup.java:94)
> at 
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.findCounter(AbstractCounterGroup.java:105)
> at 
> org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.incrAllCounters(AbstractCounterGroup.java:202)
> at 
> org.apache.hadoop.mapreduce.counters.AbstractCounters.incrAllCounters(AbstractCounters.java:337)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.constructFinalFullcounters(JobImpl.java:1212)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.mayBeConstructFinalFullCounters(JobImpl.java:1198)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.createJobFinishedEvent(JobImpl.java:1179)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.logJobHistoryFinishedEvent(JobImpl.java:711)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.checkJobCompleteSuccess(JobImpl.java:737)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$TaskCompletedTransition.checkJobForCompletion(JobImpl.java:1360)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$TaskCompletedTransition.transition(JobImpl.java:1340)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$TaskCompletedTransition.transition(JobImpl.java:1323)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:380)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:298)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:666)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:113)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:890)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:886)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:125)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:74)   
>  at java.lang.Thread.run(Thread.java:662)
> 2012-07-12 17:31:43,502 INFO [AsyncDispatcher event handler] 
> org.apache.hadoop.yarn.event.AsyncDispatcher:

[jira] [Updated] (MAPREDUCE-6096) SummarizedJob class NPEs with some jhist files

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-6096:

Labels: BB2015-05-TBR easyfix patch  (was: easyfix patch)

> SummarizedJob class NPEs with some jhist files
> --
>
> Key: MAPREDUCE-6096
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6096
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Reporter: zhangyubiao
>  Labels: BB2015-05-TBR, easyfix, patch
> Attachments: MAPREDUCE-6096-v2.patch, MAPREDUCE-6096-v3.patch, 
> MAPREDUCE-6096-v4.patch, MAPREDUCE-6096-v5.patch, MAPREDUCE-6096-v6.patch, 
> MAPREDUCE-6096-v7.patch, MAPREDUCE-6096-v8.patch, MAPREDUCE-6096.patch, 
> job_1410427642147_0124-1411726671220-hadp-word+count-1411726696863-1-1-SUCCEEDED-default.jhist
>
>
> When I parse a job history file, I use the Hadoop map-reduce-client-core 
> project's org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser class and 
> HistoryViewer$SummarizedJob to parse the file (e.g. 
> job_1408862281971_489761-1410883171851_XXX.jhist), 
> and it throws an exception like:
> Exception in thread "pool-1-thread-1" java.lang.NullPointerException
>   at 
> org.apache.hadoop.mapreduce.jobhistory.HistoryViewer$SummarizedJob.(HistoryViewer.java:626)
>   at 
> com.jd.hadoop.log.parse.ParseLogService.getJobDetail(ParseLogService.java:70)
> After examining the SummarizedJob class, I found that attempt.getTaskStatus() 
> is NULL, so I changed 
> attempt.getTaskStatus().equals(TaskStatus.State.FAILED.toString()) to 
> TaskStatus.State.FAILED.toString().equals(attempt.getTaskStatus()) 
> and it works well.
> So I wonder if we can rewrite every comparison to put 
> TaskStatus.State.XXX.toString() before attempt.getTaskStatus()?
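The fix relies on the usual constant-first idiom: calling equals() on a possibly-null status throws an NPE, while putting the string constant first does not. A standalone illustration (not the actual SummarizedJob code):

```java
import java.util.Objects;

public class NullSafeEquals {
    // Constant-first comparison: safe even when taskStatus is null.
    public static boolean isFailed(String taskStatus) {
        return "FAILED".equals(taskStatus);
    }

    public static void main(String[] args) {
        System.out.println(isFailed(null));                  // false, no NPE
        System.out.println(Objects.equals("FAILED", null));  // equivalent alternative
    }
}
```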





[jira] [Updated] (MAPREDUCE-1362) Pipes should be ported to the new mapreduce API

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-1362:

Labels: BB2015-05-TBR  (was: )

> Pipes should be ported to the new mapreduce API
> ---
>
> Key: MAPREDUCE-1362
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1362
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: pipes
>Reporter: Bassam Tabbara
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-1362-trunk.patch, MAPREDUCE-1362.patch, 
> MAPREDUCE-1362.patch
>
>
> Pipes is still currently using the old mapred API. This prevents us from 
> using pipes with HBase's TableInputFormat, HRegionPartitioner, etc. 
> Here is a rough proposal for how to accomplish this:
> * Add a new package org.apache.hadoop.mapreduce.pipes that uses the new 
> mapred API.
> * the new pipes package will run side by side with the old one; the old one 
> should be deprecated at some point.
> * the wire protocol used between PipesMapper and PipesReducer and C++ 
> programs must not change.
> * bin/hadoop should support both pipes (old api) and pipes2 (new api)
> Does this sound reasonable?





[jira] [Updated] (MAPREDUCE-3914) Mismatched free() / delete / delete [] in HadoopPipes

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-3914:

Labels: BB2015-05-TBR  (was: )

> Mismatched free() / delete / delete [] in HadoopPipes
> -
>
> Key: MAPREDUCE-3914
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3914
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: pipes
>Affects Versions: 0.20.205.0, 0.23.0, 1.0.0
> Environment: Based upon map reduce pipes task executed on Ubuntu 11.10
>Reporter: Charles Earl
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-3914-branch-0.23.patch, 
> MAPREDUCE-3914-branch-1.0.patch, MAPREDUCE-3914.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> When running valgrind on a simple MapReduce pipes job, valgrind identifies a 
> mismatched new / delete:
> ==20394== Mismatched free() / delete / delete []
> ==20394==at 0x4C27FF2: operator delete(void*) (vg_replace_malloc.c:387)
> ==20394==by 0x4328A5: HadoopPipes::runTask(HadoopPipes::Factory const&) 
> (HadoopPipes.cc:1171)
> ==20394==by 0x424C33: main (ProcessRow.cpp:118)
> ==20394==  Address 0x9c5b540 is 0 bytes inside a block of size 131,072 alloc'd
> ==20394==at 0x4C2864B: operator new[](unsigned long) 
> (vg_replace_malloc.c:305)
> ==20394==by 0x431E5D: HadoopPipes::runTask(HadoopPipes::Factory const&) 
> (HadoopPipes.cc:1121)
> ==20394==by 0x424C33: main (ProcessRow.cpp:118)
> ==20394== 
> ==20394== Mismatched free() / delete / delete []
> ==20394==at 0x4C27FF2: operator delete(void*) (vg_replace_malloc.c:387)
> ==20394==by 0x4328AF: HadoopPipes::runTask(HadoopPipes::Factory const&) 
> (HadoopPipes.cc:1172)
> ==20394==by 0x424C33: main (ProcessRow.cpp:118)
> ==20394==  Address 0x9c7b580 is 0 bytes inside a block of size 131,072 alloc'd
> ==20394==at 0x4C2864B: operator new[](unsigned long) 
> (vg_replace_malloc.c:305)
> ==20394==by 0x431E6A: HadoopPipes::runTask(HadoopPipes::Factory const&) 
> (HadoopPipes.cc:1122)
> ==20394==by 0x424C33: main (ProcessRow.cpp:118)
> The new [] calls in Lines 1121 and 1122 of HadoopPipes.cc:
> bufin = new char[bufsize];
> bufout = new char[bufsize];
> should have matching delete [] calls but are instead released by plain delete 
> on lines 1171 and 1172:
>   delete bufin;
>   delete bufout;
> So these should be replaced by delete[]





[jira] [Updated] (MAPREDUCE-5491) DFSIO do not initialize write buffer correctly

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5491:

Labels: BB2015-05-TBR  (was: )

> DFSIO do not initialize write buffer correctly
> --
>
> Key: MAPREDUCE-5491
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5491
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: benchmarks, test
>Reporter: Raymond Liu
>Assignee: Raymond Liu
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-5491-v2.patch, MAPREDUCE-5491.patch
>
>
> In the DFSIO test, IOMapperBase sets bufferSize in its configure() method, 
> while WriteMapper, AppendMapper, etc. use bufferSize to initialize the buffer 
> in their constructors. This leads to the buffer never being initialized. That 
> is fine on the non-compression path, but when compression is used the output 
> data size is very small because the buffer is all zeros.
> Thus, the overridden configure() method is the correct place to initialize 
> the buffer.
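The ordering bug can be reproduced in miniature: a field initializer in a subclass runs at construction time, before any later configure() call, so it sees the default size. Class names here are illustrative, not the actual DFSIO code:

```java
public class InitOrder {
    static class Base {
        int bufferSize = 0;                        // real value arrives only in configure()
        void configure() { bufferSize = 1 << 20; }
    }

    static class BadMapper extends Base {
        // Sized during construction, before configure() runs: always length 0.
        final byte[] buffer = new byte[bufferSize];
    }

    public static void main(String[] args) {
        BadMapper m = new BadMapper();
        m.configure();
        System.out.println(m.buffer.length); // still 0 despite configure()
    }
}
```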





[jira] [Updated] (MAPREDUCE-5549) distcp app should fail if m/r job fails

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5549:

Labels: BB2015-05-TBR  (was: )

> distcp app should fail if m/r job fails
> ---
>
> Key: MAPREDUCE-5549
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5549
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: distcp, mrv2
>Affects Versions: 3.0.0
>Reporter: David Rosenstrauch
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-5549-001.patch, MAPREDUCE-5549-002.patch
>
>
> I run distcpv2 in a scripted manner.  The script checks if the distcp step 
> fails and, if so, aborts the rest of the script.  However, I ran into an 
> issue today where the distcp job failed, but my calling script went on its 
> merry way.
> Digging into the code a bit more (at 
> https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java),
>  I think I see the issue:  the distcp app is not returning an error exit code 
> to the shell when the distcp job fails.  This is a big problem, IMO, as it 
> prevents distcp from being successfully used in a scripted environment.  IMO, 
> the code should change like so:
> Before:
> {code:title=org.apache.hadoop.tools.DistCp.java}
> //...
>   public int run(String[] argv) {
> //...
> try {
>   execute();
> } catch (InvalidInputException e) {
>   LOG.error("Invalid input: ", e);
>   return DistCpConstants.INVALID_ARGUMENT;
> } catch (DuplicateFileException e) {
>   LOG.error("Duplicate files in input path: ", e);
>   return DistCpConstants.DUPLICATE_INPUT;
> } catch (Exception e) {
>   LOG.error("Exception encountered ", e);
>   return DistCpConstants.UNKNOWN_ERROR;
> }
> return DistCpConstants.SUCCESS;
>   }
> //...
> {code}
> After:
> {code:title=org.apache.hadoop.tools.DistCp.java}
> //...
>   public int run(String[] argv) {
> //...
> Job job = null;
> try {
>   job = execute();
> } catch (InvalidInputException e) {
>   LOG.error("Invalid input: ", e);
>   return DistCpConstants.INVALID_ARGUMENT;
> } catch (DuplicateFileException e) {
>   LOG.error("Duplicate files in input path: ", e);
>   return DistCpConstants.DUPLICATE_INPUT;
> } catch (Exception e) {
>   LOG.error("Exception encountered ", e);
>   return DistCpConstants.UNKNOWN_ERROR;
> }
> if (job.isSuccessful()) {
>   return DistCpConstants.SUCCESS;
> }
> else {
>   return DistCpConstants.UNKNOWN_ERROR;
> }
>   }
> //...
> {code}





[jira] [Updated] (MAPREDUCE-5917) Be able to retrieve configuration keys by index

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5917:

Labels: BB2015-05-TBR  (was: )

> Be able to retrieve configuration keys by index
> ---
>
> Key: MAPREDUCE-5917
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5917
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: pipes
>Reporter: Joe Mudd
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-5917.patch
>
>
> The pipes C++ side does not have a configuration key/value pair iterator.  It 
> is useful to be able to iterate through all of the configuration keys without 
> having to expose a C++ map iterator since that is specific to the JobConf 
> internals.
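On the Java side, the idea can be sketched as snapshotting the key set into an indexable list, so callers retrieve keys by position without ever seeing the underlying map iterator. This is illustrative only; the class and method names are hypothetical, and the actual patch targets the pipes C++ JobConf wrapper:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative only (names are hypothetical; the real change is on the
// pipes C++ side): copy the key set into a list so configuration keys can
// be retrieved by index without exposing the map's iterator type.
public class ConfKeyIndexDemo {

    private final List<String> keys;

    ConfKeyIndexDemo(Map<String, String> conf) {
        this.keys = new ArrayList<>(conf.keySet()); // fixes an iteration order
    }

    int size() {
        return keys.size();
    }

    String keyAt(int i) {
        return keys.get(i); // throws IndexOutOfBoundsException when out of range
    }
}
```

The snapshot also insulates callers from concurrent modification of the underlying configuration while they iterate.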





[jira] [Updated] (MAPREDUCE-3097) archive does not archive if the content specified is a file

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-3097:

Labels: BB2015-05-TBR  (was: )

> archive does not archive if the content specified is a file
> ---
>
> Key: MAPREDUCE-3097
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3097
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 0.20.203.0, 0.20.205.0
>Reporter: Arpit Gupta
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-3097.patch
>
>
> The archive command only archives directories. When the content specified is a 
> file, it proceeds with the archive job but does not archive the content. This 
> can be misleading, as the user might think the archive was successful. We 
> should change it to either throw an error or archive files as well.





[jira] [Updated] (MAPREDUCE-5608) Replace and deprecate mapred.tasktracker.indexcache.mb

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5608:

Labels: BB2015-05-TBR configuration newbie  (was: configuration newbie)

> Replace and deprecate mapred.tasktracker.indexcache.mb
> --
>
> Key: MAPREDUCE-5608
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5608
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Sandy Ryza
>Assignee: Akira AJISAKA
>  Labels: BB2015-05-TBR, configuration, newbie
> Attachments: MAPREDUCE-5608-002.patch, MAPREDUCE-5608.patch
>
>
> In MR2 mapred.tasktracker.indexcache.mb still works for configuring the size 
> of the shuffle service index cache.  As the tasktracker no longer exists, we 
> should replace this with something like mapreduce.shuffle.indexcache.mb. 





[jira] [Updated] (MAPREDUCE-6273) HistoryFileManager should check whether summaryFile exists to avoid FileNotFoundException causing HistoryFileInfo into MOVE_FAILED state

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-6273:

Labels: BB2015-05-TBR  (was: )

> HistoryFileManager should check whether summaryFile exists to avoid 
> FileNotFoundException causing HistoryFileInfo into MOVE_FAILED state
> 
>
> Key: MAPREDUCE-6273
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6273
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-6273.000.patch
>
>
> HistoryFileManager should check whether the summaryFile exists, to avoid a 
> FileNotFoundException putting the HistoryFileInfo into the MOVE_FAILED state.
> I saw the following error message:
> {code}
> 2015-02-17 19:13:45,198 ERROR 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager: Error while trying to 
> move a job to done
> java.io.FileNotFoundException: File does not exist: 
> /user/history/done_intermediate/agd_laci-sluice/job_1423740288390_1884.summary
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:65)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:55)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1878)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1819)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1799)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1771)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:527)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(AuthorizationProviderProxyClientProtocol.java:85)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:356)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
>   at sun.reflect.GeneratedConstructorAccessor29.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
>   at 
> org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1181)
>   at 
> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1169)
>   at 
> org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1159)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:270)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:237)
>   at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:230)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1457)
>   at org.apache.hadoop.fs.Hdfs.open(Hdfs.java:318)
>   at org.apache.hadoop.fs.Hdfs.open(Hdfs.java:59)
>   at 
> org.apache.hadoop.fs.AbstractFileSystem.open(AbstractFileSystem.java:621)
>   at org.apache.hadoop.fs.FileContext$6.next(FileContext.java:789)
>   at org.apache.hadoop.fs.FileContext$6.next(FileContext.java:785)
>   at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
>   at org.apache.hadoop.fs.FileContext.open(FileContext.java:785)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.getJobSummary(HistoryFileManager.java:953)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileMa
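The guard described above can be sketched generically: check that the .summary file exists before opening it, so a missing file is tolerated instead of failing the whole move-to-done. This sketch uses plain java.nio for self-containment; the actual fix would use Hadoop's FileContext inside HistoryFileManager:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Generic sketch of the proposed guard (plain java.nio here, not the
// actual HistoryFileManager patch): probe for the .summary file before
// opening it, so an absent file returns null instead of surfacing a
// FileNotFoundException that fails the move to done.
public class SummaryGuardDemo {

    // Returns the summary contents, or null when the file is absent.
    static String readSummaryIfPresent(Path summaryFile) throws IOException {
        if (!Files.exists(summaryFile)) {
            return null; // tolerate a missing .summary instead of throwing
        }
        return new String(Files.readAllBytes(summaryFile));
    }
}
```

The caller can then treat a null summary as "nothing to record" and still move the history files to the done directory.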

[jira] [Updated] (MAPREDUCE-4919) All maps hangs when set mapreduce.task.io.sort.factor to 1

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-4919:

Labels: BB2015-05-TBR  (was: )

> All maps hangs when set mapreduce.task.io.sort.factor to 1
> --
>
> Key: MAPREDUCE-4919
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4919
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: client
>Reporter: Jerry Chen
>Assignee: Jerry Chen
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-4919.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> In one of my tests, when I set mapreduce.task.io.sort.factor to 1, all 
> the maps hang and never finish. The CPU usage on each node stays very 
> high until the tasks are killed by the app master on timeout, and the 
> job fails.
> I traced the problem and found that all the maps hang in the final merge 
> phase.
> The while loop in computeBytesInMerges will never end with a factor of 1:
> {code}
> int f = 1; // in my case
> int n = 16; // in my case
> while (n > f || considerFinalMerge) {
>   ...
>   n -= (f-1);
>   f = factor;
> }
> {code}
> Since f-1 equals 0, n stays at 16 and the while loop runs forever.
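A minimal, self-contained sketch of that recurrence (a hypothetical helper, simplified from the real computeBytesInMerges and not the Hadoop patch) shows why any factor below 2 must be rejected for the loop to terminate:

```java
// Hypothetical helper, not the actual Hadoop code or patch: a simplified
// form of the merge-pass recurrence. The guard rejects factors below 2,
// because n -= (f - 1) subtracts zero when f == 1 and the segment count
// never shrinks.
public class MergePassDemo {

    // Returns how many merge passes reduce n segments to at most 'factor'
    // segments, counting the final merge as one pass.
    static int computeMergePasses(int n, int factor) {
        if (factor < 2) {
            throw new IllegalArgumentException(
                "merge factor must be >= 2, got " + factor);
        }
        int passes = 0;
        int f = Math.min(factor, n); // first pass may merge fewer segments
        while (n > factor) {
            n -= (f - 1);            // each pass replaces f segments with 1
            f = factor;
            passes++;
        }
        return passes + 1;           // the final merge
    }
}
```

With n = 16 and factor = 1, the unguarded recurrence leaves n at 16 on every iteration, matching the hang described above.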





[jira] [Updated] (MAPREDUCE-5951) Add support for the YARN Shared Cache

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-5951:

Labels: BB2015-05-TBR  (was: )

> Add support for the YARN Shared Cache
> -
>
> Key: MAPREDUCE-5951
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5951
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-5951-trunk-v1.patch, 
> MAPREDUCE-5951-trunk-v2.patch, MAPREDUCE-5951-trunk-v3.patch, 
> MAPREDUCE-5951-trunk-v4.patch, MAPREDUCE-5951-trunk-v5.patch, 
> MAPREDUCE-5951-trunk-v6.patch, MAPREDUCE-5951-trunk-v7.patch, 
> MAPREDUCE-5951-trunk-v8.patch
>
>
> Implement the necessary changes so that the MapReduce application can 
> leverage the new YARN shared cache (i.e. YARN-1492).
> Specifically, allow per-job configuration so that MapReduce jobs can specify 
> which set of resources they would like to cache (i.e. jobjar, libjars, 
> archives, files).





[jira] [Updated] (MAPREDUCE-6271) org.apache.hadoop.mapreduce.Cluster GetJob() display warn log

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-6271:

Labels: BB2015-05-TBR  (was: )

> org.apache.hadoop.mapreduce.Cluster GetJob() display warn log
> -
>
> Key: MAPREDUCE-6271
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6271
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: client
>Affects Versions: 2.7.0
>Reporter: Peng Zhang
>Assignee: Peng Zhang
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-6271.v2.patch, MR-6271.patch
>
>
> When using getJob() with MapReduce 2.7, a warn log caused by the configuration 
> being loaded twice is displayed every time. And once the job has completed, 
> this function displays a "java.io.FileNotFoundException" warn log.
> I think this is related to MAPREDUCE-5875; the change in getJob() seems 
> unnecessary, since it is only needed for tests.
> {noformat}
> 15/03/04 13:41:23 WARN conf.Configuration: 
> hdfs://example/yarn/example2/staging/test_user/.staging/job_1425388652704_0116/job.xml:an
>  attempt to override final parameter: 
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 15/03/04 13:41:23 WARN conf.Configuration: 
> hdfs://example/yarn/example2/staging/test_user/.staging/job_1425388652704_0116/job.xml:an
>  attempt to override final parameter: 
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 15/03/04 13:41:24 WARN conf.Configuration: 
> hdfs://example/yarn/example2/staging/test_user/.staging/job_1425388652704_0116/job.xml:an
>  attempt to override final parameter: 
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 15/03/04 13:41:24 WARN conf.Configuration: 
> hdfs://example/yarn/example2/staging/test_user/.staging/job_1425388652704_0116/job.xml:an
>  attempt to override final parameter: 
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 15/03/04 13:41:25 WARN conf.Configuration: 
> hdfs://example/yarn/example2/staging/test_user/.staging/job_1425388652704_0116/job.xml:an
>  attempt to override final parameter: 
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 15/03/04 13:41:25 WARN conf.Configuration: 
> hdfs://example/yarn/example2/staging/test_user/.staging/job_1425388652704_0116/job.xml:an
>  attempt to override final parameter: 
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 15/03/04 13:41:26 WARN conf.Configuration: 
> hdfs://example/yarn/example2/staging/test_user/.staging/job_1425388652704_0116/job.xml:an
>  attempt to override final parameter: 
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 15/03/04 13:41:26 WARN conf.Configuration: 
> hdfs://example/yarn/example2/staging/test_user/.staging/job_1425388652704_0116/job.xml:an
>  attempt to override final parameter: 
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 15/03/04 13:41:27 WARN conf.Configuration: 
> hdfs://example/yarn/example2/staging/test_user/.staging/job_1425388652704_0116/job.xml:an
>  attempt to override final parameter: 
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 15/03/04 13:41:27 WARN conf.Configuration: 
> hdfs://example/yarn/example2/staging/test_user/.staging/job_1425388652704_0116/job.xml:an
>  attempt to override final parameter: 
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 15/03/04 13:41:28 WARN conf.Configuration: 
> hdfs://example/yarn/example2/staging/test_user/.staging/job_1425388652704_0116/job.xml:an
>  attempt to override final parameter: 
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 15/03/04 13:41:28 WARN conf.Configuration: 
> hdfs://example/yarn/example2/staging/test_user/.staging/job_1425388652704_0116/job.xml:an
>  attempt to override final parameter: 
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 15/03/04 13:41:29 WARN conf.Configuration: 
> hdfs://example/yarn/example2/staging/test_user/.staging/job_1425388652704_0116/job.xml:an
>  attempt to override final parameter: 
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 15/03/04 13:41:29 WARN conf.Configuration: 
> hdfs://example/yarn/example2/staging/test_user/.staging/job_1425388652704_0116/job.xml:an
>  attempt to override final parameter: 
> mapreduce.job.end-notification.max.attempts;  Ignoring.
> 15/03/04 13:41:29 INFO exec.Task: 2015-03-04 13:41:29,853 Stage-1 map = 100%, 
>  reduce = 0%, Cumulative CPU 2.37 sec
> 15/03/04 13:41:30 WARN conf.Configuration: 
> hdfs://example/yarn/example2/staging/test_user/.staging/job_1425388652704_0116/job.xml:an
>  attempt to override final parameter: 
> mapreduce.job.end-notification.max.retry.interval;  Ignoring.
> 15/03/04 13:41:30 WARN conf.Configuration: 
> hdfs://example/yarn/example2/staging/test_user/.staging/job_1425388652704_0116/j

[jira] [Updated] (MAPREDUCE-6296) A better way to deal with InterruptedException on waitForCompletion

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated MAPREDUCE-6296:

Labels: BB2015-05-TBR  (was: )

> A better way to deal with InterruptedException on waitForCompletion
> ---
>
> Key: MAPREDUCE-6296
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6296
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Yang Hao
>Assignee: Yang Hao
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-6296.patch
>
>
> Some code in method waitForCompletion of Job class is 
> {code:title=Job.java|borderStyle=solid}
>   public boolean waitForCompletion(boolean verbose
>) throws IOException, InterruptedException,
> ClassNotFoundException {
> if (state == JobState.DEFINE) {
>   submit();
> }
> if (verbose) {
>   monitorAndPrintJob();
> } else {
>   // get the completion poll interval from the client.
>   int completionPollIntervalMillis = 
> Job.getCompletionPollInterval(cluster.getConf());
>   while (!isComplete()) {
> try {
>   Thread.sleep(completionPollIntervalMillis);
> } catch (InterruptedException ie) {
> }
>   }
> }
> return isSuccessful();
>   }
> {code}
> but a better way to deal with InterruptedException is
> {code:title=Job.java|borderStyle=solid}
>   public boolean waitForCompletion(boolean verbose
>) throws IOException, InterruptedException,
> ClassNotFoundException {
> if (state == JobState.DEFINE) {
>   submit();
> }
> if (verbose) {
>   monitorAndPrintJob();
> } else {
>   // get the completion poll interval from the client.
>   int completionPollIntervalMillis = 
> Job.getCompletionPollInterval(cluster.getConf());
>   while (!isComplete()) {
> try {
>   Thread.sleep(completionPollIntervalMillis);
> } catch (InterruptedException ie) {
>   Thread.currentThread().interrupt();
> }
>   }
> }
> return isSuccessful();
>   }
> {code}
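The difference matters because Thread.sleep clears the thread's interrupt status when it throws; re-asserting the flag in the catch block keeps the interruption visible to callers of waitForCompletion. A tiny self-contained demo (illustrative, not Hadoop code):

```java
// Illustrative demo, not Hadoop code: Thread.sleep clears the interrupt
// status when it throws InterruptedException. Calling
// Thread.currentThread().interrupt() in the catch block, as the proposed
// change does, restores the flag so callers can still observe it.
public class InterruptDemo {

    // Simulates one poll-loop iteration interrupted mid-sleep and reports
    // whether the interrupt status survived the catch block.
    static boolean sleepPreservingInterrupt(boolean restoreFlag) {
        Thread.currentThread().interrupt();  // simulate an incoming interrupt
        try {
            Thread.sleep(10);                // throws at once, clearing the flag
        } catch (InterruptedException ie) {
            if (restoreFlag) {
                Thread.currentThread().interrupt(); // restore interrupt status
            }
        }
        boolean visible = Thread.currentThread().isInterrupted();
        Thread.interrupted();                // clear the flag for the next call
        return visible;
    }
}
```

Without the restore, an interrupted waitForCompletion keeps polling as if nothing happened, and the caller's cancellation request is silently lost.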




