[jira] [Created] (MAPREDUCE-5501) RMContainer Allocator loops forever after cluster shutdown in tests

2013-09-09 Thread Andrey Klochkov (JIRA)
Andrey Klochkov created MAPREDUCE-5501:
--

 Summary: RMContainer Allocator loops forever after cluster 
shutdown in tests
 Key: MAPREDUCE-5501
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5501
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: trunk
Reporter: Andrey Klochkov


After running MR job client tests, many MRAppMaster processes stay alive. The 
reason seems to be that the RMContainer Allocator thread ignores 
InterruptedException and keeps retrying:

{code}
2013-09-09 18:52:07,505 WARN [RMCommunicator Allocator] 
org.apache.hadoop.util.ThreadUtil: interrupted while sleeping
java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.util.ThreadUtil.sleepAtLeastIgnoreInterrupts(ThreadUtil.java:43)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:149)
at com.sun.proxy.$Proxy29.allocate(Unknown Source)
at 
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor.makeRemoteRequest(RMContainerRequestor.java:154)
at 
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:553)
at 
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.heartbeat(RMContainerAllocator.java:219)
at 
org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator$1.run(RMCommunicator.java:236)
at java.lang.Thread.run(Thread.java:680)
2013-09-09 18:52:37,639 INFO [RMCommunicator Allocator] 
org.apache.hadoop.ipc.Client: Retrying connect to server: 
dhcpx-197-141.corp.yahoo.com/10.73.197.141:61163. Already tried 0 time(s); 
retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
SECONDS)
2013-09-09 18:52:38,640 INFO [RMCommunicator Allocator] 
org.apache.hadoop.ipc.Client: Retrying connect to server: 
dhcpx-197-141.corp.yahoo.com/10.73.197.141:61163. Already tried 1 time(s); 
retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 
SECONDS)
{code}

It takes more than 6 minutes for the processes to die, and this causes various 
issues with tests that use the same DFS dir. 

{code}
2013-09-09 22:26:47,179 ERROR [RMCommunicator Allocator] 
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Error communicating 
with RM: Could not contact RM after 36 milliseconds.
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Could not contact RM 
after 36 milliseconds.
at 
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:563)
at 
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.heartbeat(RMContainerAllocator.java:219)
at 
org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator$1.run(RMCommunicator.java:236)
at java.lang.Thread.run(Thread.java:680)
{code}

Will attach a thread dump separately. 
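
For reference, a minimal sketch (not the actual RMCommunicator/RetryInvocationHandler 
code; names are illustrative) of a heartbeat loop that stops on interruption by 
restoring the interrupt flag instead of swallowing InterruptedException:

{code}
/** Sketch only: a heartbeat loop that exits when its thread is interrupted. */
public class InterruptAwareHeartbeatLoop implements Runnable {
  private final long intervalMs;

  public InterruptAwareHeartbeatLoop(long intervalMs) {
    this.intervalMs = intervalMs;
  }

  @Override
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      try {
        heartbeat();              // stand-in for the remote allocate() call
        Thread.sleep(intervalMs);
      } catch (InterruptedException ie) {
        // Restore the interrupt status so the loop condition sees it and the thread stops.
        Thread.currentThread().interrupt();
      }
    }
  }

  private void heartbeat() {
    // illustrative no-op
  }
}
{code}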



[jira] [Updated] (MAPREDUCE-5501) RMContainer Allocator does not stop when cluster shutdown is performed in tests

2013-09-09 Thread Andrey Klochkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Klochkov updated MAPREDUCE-5501:
---

Summary: RMContainer Allocator does not stop when cluster shutdown is 
performed in tests  (was: RMContainer Allocator loops forever after cluster 
shutdown in tests)

> RMContainer Allocator does not stop when cluster shutdown is performed in 
> tests
> ---
>
> Key: MAPREDUCE-5501
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5501
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: trunk
>Reporter: Andrey Klochkov
>
> After running MR job client tests, many MRAppMaster processes stay alive. The 
> reason seems to be that the RMContainer Allocator thread ignores 
> InterruptedException and keeps retrying:
> {code}
> 2013-09-09 18:52:07,505 WARN [RMCommunicator Allocator] 
> org.apache.hadoop.util.ThreadUtil: interrupted while sleeping
> java.lang.InterruptedException: sleep interrupted
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.util.ThreadUtil.sleepAtLeastIgnoreInterrupts(ThreadUtil.java:43)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:149)
> at com.sun.proxy.$Proxy29.allocate(Unknown Source)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor.makeRemoteRequest(RMContainerRequestor.java:154)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:553)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.heartbeat(RMContainerAllocator.java:219)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator$1.run(RMCommunicator.java:236)
> at java.lang.Thread.run(Thread.java:680)
> 2013-09-09 18:52:37,639 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.ipc.Client: Retrying connect to server: 
> dhcpx-197-141.corp.yahoo.com/10.73.197.141:61163. Already tried 0 time(s); 
> retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1 SECONDS)
> 2013-09-09 18:52:38,640 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.ipc.Client: Retrying connect to server: 
> dhcpx-197-141.corp.yahoo.com/10.73.197.141:61163. Already tried 1 time(s); 
> retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1 SECONDS)
> {code}
> It takes more than 6 minutes for the processes to die, and this causes various 
> issues with tests that use the same DFS dir. 
> {code}
> 2013-09-09 22:26:47,179 ERROR [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Error 
> communicating with RM: Could not contact RM after 36 milliseconds.
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Could not contact RM 
> after 36 milliseconds.
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:563)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.heartbeat(RMContainerAllocator.java:219)
> at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator$1.run(RMCommunicator.java:236)
> at java.lang.Thread.run(Thread.java:680)
> {code}
> Will attach a thread dump separately. 



[jira] [Commented] (MAPREDUCE-5463) Deprecate SLOTS_MILLIS counters

2013-09-09 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762761#comment-13762761
 ] 

Tsuyoshi OZAWA commented on MAPREDUCE-5463:
---

The warnings generated by the latest patch are valid, because we mark 
JobCounter.*SLOTS_MILLIS* as deprecated. For more details, please see: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3986//artifact/trunk/patchprocess/diffJavacWarnings.txt
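
For illustration only (a toy enum, not the real JobCounter source): once an enum 
constant carries {{@Deprecated}}, javac flags every remaining reference, which is 
why the new warnings are expected rather than a regression.

{code}
// Toy example: referencing a @Deprecated enum constant triggers a javac deprecation warning.
enum ToyJobCounter {
  @Deprecated SLOTS_MILLIS_MAPS,     // hypothetical stand-ins for the deprecated counters
  @Deprecated SLOTS_MILLIS_REDUCES,
  MILLIS_MAPS,
  MILLIS_REDUCES
}

class ToyCounterUser {
  int read() {
    // javac reports roughly: "[deprecation] SLOTS_MILLIS_MAPS in ToyJobCounter has been deprecated"
    return ToyJobCounter.SLOTS_MILLIS_MAPS.ordinal();
  }
}
{code}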

> Deprecate SLOTS_MILLIS counters
> ---
>
> Key: MAPREDUCE-5463
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5463
> Project: Hadoop Map/Reduce
>  Issue Type: Task
>Affects Versions: 2.1.0-beta
>Reporter: Sandy Ryza
>Assignee: Tsuyoshi OZAWA
> Attachments: MAPREDUCE-5463.1.patch, MAPREDUCE-5463.2.patch
>
>
> As discussed in MAPREDUCE-5311, the SLOTS_MILLIS_MAPS and 
> SLOTS_MILLIS_REDUCES counters don't really make sense in MR2, and should be 
> deprecated so that they can eventually be removed.



[jira] [Commented] (MAPREDUCE-5497) '5s sleep' in MRAppMaster.shutDownJob is only needed before stopping ClientService

2013-09-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762660#comment-13762660
 ] 

Hadoop QA commented on MAPREDUCE-5497:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12602263/MAPREDUCE-5497.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3989//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3989//console

This message is automatically generated.

> '5s sleep'  in MRAppMaster.shutDownJob is only needed before stopping 
> ClientService
> ---
>
> Key: MAPREDUCE-5497
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5497
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: MAPREDUCE-5497.1.patch, MAPREDUCE-5497.1.patch, 
> MAPREDUCE-5497.2.patch, MAPREDUCE-5497.patch
>
>
> Since the purpose of the '5s sleep' is to let clients learn the final state, it 
> is enough to put it after the other services are stopped and just before 
> stopping ClientService. This can reduce some race conditions like 
> MAPREDUCE-5471.
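
A rough sketch of the ordering the description argues for (a hypothetical helper, 
not the actual MRAppMaster.shutDownJob code): stop everything that does not serve 
clients, sleep so clients can pick up the final state, then stop the client-facing 
service last.

{code}
import java.util.List;
import org.apache.hadoop.service.Service;

/** Hypothetical helper illustrating the proposed shutdown ordering. */
class ShutdownOrderSketch {
  void shutDownJob(List<Service> otherServices, Service clientService)
      throws InterruptedException {
    for (Service s : otherServices) {
      s.stop();             // stop everything that does not serve clients first
    }
    Thread.sleep(5000L);    // the '5s sleep': give clients a window to learn the final state
    clientService.stop();   // stop the client-facing service last
  }
}
{code}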



[jira] [Commented] (MAPREDUCE-5499) Fix synchronization issues of the setters/getters of *PBImpl which take in/return lists

2013-09-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762648#comment-13762648
 ] 

Hadoop QA commented on MAPREDUCE-5499:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12602260/MAPREDUCE-5499.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3988//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3988//console

This message is automatically generated.

> Fix synchronization issues of the setters/getters of *PBImpl which take 
> in/return lists
> ---
>
> Key: MAPREDUCE-5499
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5499
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Zhijie Shen
>Assignee: Xuan Gong
> Attachments: MAPREDUCE-5499.1.patch
>
>
> Similar to YARN-609. The following *PBImpls need to be fixed:
> 1. GetDiagnosticsResponsePBImpl
> 2. GetTaskAttemptCompletionEventsResponsePBImpl
> 3. GetTaskReportsResponsePBImpl
> 4. CounterGroupPBImpl
> 5. JobReportPBImpl
> 6. TaskReportPBImpl
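
The usual shape of this kind of fix, as a toy sketch (not the actual PBImpl code): 
make the list-returning getters and list-taking setters synchronized so concurrent 
callers never observe a half-initialized local list.

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/** Toy sketch of a PBImpl-style class caching a list; shows the synchronization pattern only. */
class ToyReportPBImpl {
  private List<String> diagnostics;   // lazily initialized local cache, as in the PBImpls

  public synchronized List<String> getDiagnosticsList() {
    initDiagnostics();
    return Collections.unmodifiableList(diagnostics);
  }

  public synchronized void addAllDiagnostics(List<String> entries) {
    initDiagnostics();
    diagnostics.addAll(entries);
  }

  private void initDiagnostics() {
    if (diagnostics == null) {
      // In the real classes this would be filled from the protobuf builder.
      diagnostics = new ArrayList<String>();
    }
  }
}
{code}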



[jira] [Commented] (MAPREDUCE-5497) '5s sleep' in MRAppMaster.shutDownJob is only needed before stopping ClientService

2013-09-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762641#comment-13762641
 ] 

Jian He commented on MAPREDUCE-5497:


The new patch addresses the review comment.

> '5s sleep'  in MRAppMaster.shutDownJob is only needed before stopping 
> ClientService
> ---
>
> Key: MAPREDUCE-5497
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5497
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: MAPREDUCE-5497.1.patch, MAPREDUCE-5497.1.patch, 
> MAPREDUCE-5497.2.patch, MAPREDUCE-5497.patch
>
>
> Since the purpose of the '5s sleep' is to let clients learn the final state, it 
> is enough to put it after the other services are stopped and just before 
> stopping ClientService. This can reduce some race conditions like 
> MAPREDUCE-5471.



[jira] [Updated] (MAPREDUCE-5497) '5s sleep' in MRAppMaster.shutDownJob is only needed before stopping ClientService

2013-09-09 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated MAPREDUCE-5497:
---

Status: Patch Available  (was: Open)

> '5s sleep'  in MRAppMaster.shutDownJob is only needed before stopping 
> ClientService
> ---
>
> Key: MAPREDUCE-5497
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5497
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: MAPREDUCE-5497.1.patch, MAPREDUCE-5497.1.patch, 
> MAPREDUCE-5497.2.patch, MAPREDUCE-5497.patch
>
>
> Since the purpose of the '5s sleep' is to let clients learn the final state, it 
> is enough to put it after the other services are stopped and just before 
> stopping ClientService. This can reduce some race conditions like 
> MAPREDUCE-5471.



[jira] [Updated] (MAPREDUCE-5497) '5s sleep' in MRAppMaster.shutDownJob is only needed before stopping ClientService

2013-09-09 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated MAPREDUCE-5497:
---

Attachment: MAPREDUCE-5497.2.patch

> '5s sleep'  in MRAppMaster.shutDownJob is only needed before stopping 
> ClientService
> ---
>
> Key: MAPREDUCE-5497
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5497
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: MAPREDUCE-5497.1.patch, MAPREDUCE-5497.1.patch, 
> MAPREDUCE-5497.2.patch, MAPREDUCE-5497.patch
>
>
> Since the purpose of the '5s sleep' is to let clients learn the final state, it 
> is enough to put it after the other services are stopped and just before 
> stopping ClientService. This can reduce some race conditions like 
> MAPREDUCE-5471.



[jira] [Updated] (MAPREDUCE-5499) Fix synchronization issues of the setters/getters of *PBImpl which take in/return lists

2013-09-09 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated MAPREDUCE-5499:
-

Attachment: MAPREDUCE-5499.1.patch

> Fix synchronization issues of the setters/getters of *PBImpl which take 
> in/return lists
> ---
>
> Key: MAPREDUCE-5499
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5499
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Zhijie Shen
>Assignee: Xuan Gong
> Attachments: MAPREDUCE-5499.1.patch
>
>
> Similar to YARN-609. The following *PBImpls need to be fixed:
> 1. GetDiagnosticsResponsePBImpl
> 2. GetTaskAttemptCompletionEventsResponsePBImpl
> 3. GetTaskReportsResponsePBImpl
> 4. CounterGroupPBImpl
> 5. JobReportPBImpl
> 6. TaskReportPBImpl



[jira] [Updated] (MAPREDUCE-5499) Fix synchronization issues of the setters/getters of *PBImpl which take in/return lists

2013-09-09 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated MAPREDUCE-5499:
-

Status: Patch Available  (was: Open)

> Fix synchronization issues of the setters/getters of *PBImpl which take 
> in/return lists
> ---
>
> Key: MAPREDUCE-5499
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5499
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Zhijie Shen
>Assignee: Xuan Gong
> Attachments: MAPREDUCE-5499.1.patch
>
>
> Similar to YARN-609. The following *PBImpls need to be fixed:
> 1. GetDiagnosticsResponsePBImpl
> 2. GetTaskAttemptCompletionEventsResponsePBImpl
> 3. GetTaskReportsResponsePBImpl
> 4. CounterGroupPBImpl
> 5. JobReportPBImpl
> 6. TaskReportPBImpl



[jira] [Updated] (MAPREDUCE-5497) '5s sleep' in MRAppMaster.shutDownJob is only needed before stopping ClientService

2013-09-09 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-5497:
---

Status: Open  (was: Patch Available)

Patch looks good, except that you unnecessarily made ClientService an abstract 
class. ClientService can simply extend Service and still be an interface.
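
A minimal sketch of the shape being suggested (the method shown is illustrative, 
not necessarily the real ClientService API): the type stays an interface by 
extending {{Service}} instead of becoming an abstract class.

{code}
import java.net.InetSocketAddress;
import org.apache.hadoop.service.Service;

// Sketch only: an interface can extend the Service interface directly, so
// implementations still provide the service lifecycle without an abstract base class.
interface ClientServiceSketch extends Service {
  InetSocketAddress getBindAddress();   // illustrative method
}
{code}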

> '5s sleep'  in MRAppMaster.shutDownJob is only needed before stopping 
> ClientService
> ---
>
> Key: MAPREDUCE-5497
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5497
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: MAPREDUCE-5497.1.patch, MAPREDUCE-5497.1.patch, 
> MAPREDUCE-5497.patch
>
>
> Since the '5s sleep' is for the purpose to let clients know the final states, 
> put it after other services are stopped and only before stopping 
> ClientService is enough. This can reduce some race conditions like 
> MAPREDUCE-5471



[jira] [Assigned] (MAPREDUCE-5465) Container killed before hprof dumps profile.out

2013-09-09 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash reassigned MAPREDUCE-5465:
---

Assignee: Ravi Prakash

> Container killed before hprof dumps profile.out
> ---
>
> Key: MAPREDUCE-5465
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5465
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mr-am, mrv2
>Affects Versions: 2.0.3-alpha
>Reporter: Radim Kolar
>Assignee: Ravi Prakash
>
> If profiling is enabled for a mapper or reducer, hprof dumps profile.out at 
> process exit. The dump is written after the task has signaled to the AM that 
> its work is finished.
> The AM kills the container with finished work without waiting for hprof to 
> finish dumping. If hprof is producing larger output (such as with depth=4, 
> while depth=3 works), it cannot finish the dump in time before being killed, 
> making the entire dump unusable because the cpu and heap stats are missing.
> There needs to be a better delay before the container is killed if profiling 
> is enabled.
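
For context, a hedged sketch of how this kind of profiling is typically switched 
on through job configuration, assuming the standard mapreduce.task.profile* keys; 
the hprof options shown (including depth=4) are only an example.

{code}
import org.apache.hadoop.conf.Configuration;

/** Sketch: enabling task profiling so hprof writes profile.out when the task JVM exits. */
class ProfilingConfigSketch {
  static Configuration withProfiling(Configuration conf) {
    conf.setBoolean("mapreduce.task.profile", true);
    // Example hprof options; depth=4 is the case reported as producing larger dumps.
    conf.set("mapreduce.task.profile.params",
        "-agentlib:hprof=cpu=samples,heap=sites,depth=4,force=n,thread=y,verbose=n,file=%s");
    return conf;
  }
}
{code}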



[jira] [Updated] (MAPREDUCE-5500) Accessing task page for running job throw 500 Error code

2013-09-09 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated MAPREDUCE-5500:
--

Component/s: mr-am
   Assignee: Paul Han

Since this page is controlled by the MRAppMaster and not YARN, moving this to 
the MAPREDUCE project.

> Accessing task page for running job throw 500 Error code
> 
>
> Key: MAPREDUCE-5500
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5500
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mr-am
>Affects Versions: 2.0.5-alpha
>Reporter: Paul Han
>Assignee: Paul Han
>
> For running jobs on Hadoop 2.0, trying to access the Task counters page throws 
> a Server 500 error. Digging a bit, I see this exception in the MRAppMaster logs:
> {noformat}
> 2013-08-09 21:54:35,083 ERROR [556661283@qtp-875702288-23] 
> org.apache.hadoop.yarn.webapp.Dispatcher: error handling URI: 
> /mapreduce/task/task_1376081364308_0002_m_01
> java.lang.reflect.InvocationTargetException
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:606)
>  at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:150)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>  at 
> com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:263)
>  at 
> com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:178)
>  at 
> com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91)
>  at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:62)
>  at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:900)
>  at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
>  at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
>  at 
> com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
>  at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
>  at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
>  at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
>  at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>  at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:123)
>  at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>  at 
> org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1069)
>  at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>  at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>  at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>  at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>  at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>  at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>  at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>  at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>  at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>  at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>  at org.mortbay.jetty.Server.handle(Server.java:326)
>  at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>  at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>  at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>  at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>  at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>  at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
>  at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> Caused by: org.apache.hadoop.yarn.webapp.WebAppException: Error rendering 
> block: nestLevel=6 expected 5
>  at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:66)
>  at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:74)
>  at org.apache.hadoop.yarn.

[jira] [Moved] (MAPREDUCE-5500) Accessing task page for running job throw 500 Error code

2013-09-09 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe moved YARN-1174 to MAPREDUCE-5500:
-

Affects Version/s: (was: 2.0.5-alpha)
   2.0.5-alpha
  Key: MAPREDUCE-5500  (was: YARN-1174)
  Project: Hadoop Map/Reduce  (was: Hadoop YARN)

> Accessing task page for running job throw 500 Error code
> 
>
> Key: MAPREDUCE-5500
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5500
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.0.5-alpha
>Reporter: Paul Han
>
> For running jobs on Hadoop 2.0, trying to access the Task counters page throws 
> a Server 500 error. Digging a bit, I see this exception in the MRAppMaster logs:
> {noformat}
> 2013-08-09 21:54:35,083 ERROR [556661283@qtp-875702288-23] 
> org.apache.hadoop.yarn.webapp.Dispatcher: error handling URI: 
> /mapreduce/task/task_1376081364308_0002_m_01
> java.lang.reflect.InvocationTargetException
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:606)
>  at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:150)
>  at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>  at 
> com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:263)
>  at 
> com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:178)
>  at 
> com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91)
>  at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:62)
>  at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:900)
>  at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
>  at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
>  at 
> com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
>  at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
>  at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
>  at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
>  at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>  at 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.doFilter(AmIpFilter.java:123)
>  at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>  at 
> org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1069)
>  at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>  at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>  at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>  at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>  at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>  at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>  at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>  at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>  at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>  at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>  at org.mortbay.jetty.Server.handle(Server.java:326)
>  at 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>  at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>  at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>  at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>  at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>  at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
>  at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> Caused by: org.apache.hadoop.yarn.webapp.WebAppException: Error rendering 
> block: nestLevel=6 expected 5
>  at org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:66)
>  at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:74)
>  at org.apache.hadoop.yarn.webap

[jira] [Commented] (MAPREDUCE-5465) Container killed before hprof dumps profile.out

2013-09-09 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762341#comment-13762341
 ] 

Ravi Prakash commented on MAPREDUCE-5465:
-

Has anyone tried fixing this? If not, I may take a crack at it.

> Container killed before hprof dumps profile.out
> ---
>
> Key: MAPREDUCE-5465
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5465
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mr-am, mrv2
>Affects Versions: 2.0.3-alpha
>Reporter: Radim Kolar
>
> If profiling is enabled for a mapper or reducer, hprof dumps profile.out at 
> process exit. The dump is written after the task has signaled to the AM that 
> its work is finished.
> The AM kills the container with finished work without waiting for hprof to 
> finish dumping. If hprof is producing larger output (such as with depth=4, 
> while depth=3 works), it cannot finish the dump in time before being killed, 
> making the entire dump unusable because the cpu and heap stats are missing.
> There needs to be a better delay before the container is killed if profiling 
> is enabled.



[jira] [Updated] (MAPREDUCE-5170) incorrect exception message if min node size > min rack size

2013-09-09 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated MAPREDUCE-5170:
---

Assignee: Sangjin Lee

> incorrect exception message if min node size > min rack size
> 
>
> Key: MAPREDUCE-5170
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5170
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Affects Versions: 2.0.3-alpha
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Trivial
> Fix For: 2.3.0
>
> Attachments: MAPREDUCE-5170.patch
>
>
> The exception message for CombineFileInputFormat if min node size > min rack 
> size is worded backwards.
> Currently it reads "Minimum split size per node... cannot be smaller than the 
> minimum split size per rack..."
> It should be "Minimum split size per node... cannot be LARGER than the 
> minimum split size per rack..."
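
The fix amounts to flipping the wording in the guard; a simplified sketch of what 
that check looks like (not the exact CombineFileInputFormat source):

{code}
import java.io.IOException;

/** Simplified sketch of the split-size sanity check with the corrected message. */
class SplitSizeCheckSketch {
  static void validate(long minSizeNode, long minSizeRack) throws IOException {
    if (minSizeNode != 0 && minSizeRack != 0 && minSizeNode > minSizeRack) {
      throw new IOException("Minimum split size per node " + minSizeNode
          + " cannot be larger than the minimum split size per rack " + minSizeRack);
    }
  }
}
{code}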



[jira] [Updated] (MAPREDUCE-5069) add concrete common implementations of CombineFileInputFormat

2013-09-09 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated MAPREDUCE-5069:
---

Assignee: Sangjin Lee

> add concrete common implementations of CombineFileInputFormat
> -
>
> Key: MAPREDUCE-5069
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5069
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv1, mrv2
>Affects Versions: 2.0.3-alpha
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Minor
> Fix For: 3.0.0, 2.1.0-beta
>
> Attachments: MAPREDUCE-5069-1.patch, MAPREDUCE-5069-2.patch, 
> MAPREDUCE-5069-3.patch, MAPREDUCE-5069-4.patch, MAPREDUCE-5069-5.patch, 
> MAPREDUCE-5069-6.patch, MAPREDUCE-5069.patch
>
>
> CombineFileInputFormat is abstract, and its specific equivalents to 
> TextInputFormat, SequenceFileInputFormat, etc. are currently not in the 
> hadoop code base.
> These sound like a very common need wherever CombineFileInputFormat is used, 
> and different folks would write the same code over and over to achieve the 
> same goal. It seems very natural for Hadoop to provide at least the text and 
> sequence file implementations of the CombineFileInputFormat class.
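
As a sketch of what such a concrete implementation could look like under the new 
({{org.apache.hadoop.mapreduce}}) API -- class names here are illustrative, not a 
proposed patch -- the format only has to supply a record reader that replays each 
file of the combined split through {{LineRecordReader}}:

{code}
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader;
import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

/** Sketch of a text-oriented concrete CombineFileInputFormat. */
public class SketchCombineTextInputFormat
    extends CombineFileInputFormat<LongWritable, Text> {

  @Override
  public RecordReader<LongWritable, Text> createRecordReader(
      InputSplit split, TaskAttemptContext context) throws IOException {
    return new CombineFileRecordReader<LongWritable, Text>(
        (CombineFileSplit) split, context, SketchLineReaderWrapper.class);
  }

  /** Adapts LineRecordReader to the per-file constructor CombineFileRecordReader expects. */
  public static class SketchLineReaderWrapper extends RecordReader<LongWritable, Text> {
    private final LineRecordReader delegate = new LineRecordReader();
    private final FileSplit fileSplit;

    public SketchLineReaderWrapper(CombineFileSplit split, TaskAttemptContext context,
        Integer index) throws IOException {
      // Carve the index-th file out of the combined split for the wrapped reader.
      fileSplit = new FileSplit(split.getPath(index), split.getOffset(index),
          split.getLength(index), split.getLocations());
    }

    @Override
    public void initialize(InputSplit ignored, TaskAttemptContext context)
        throws IOException, InterruptedException {
      delegate.initialize(fileSplit, context);
    }

    @Override
    public boolean nextKeyValue() throws IOException {
      return delegate.nextKeyValue();
    }

    @Override
    public LongWritable getCurrentKey() {
      return delegate.getCurrentKey();
    }

    @Override
    public Text getCurrentValue() {
      return delegate.getCurrentValue();
    }

    @Override
    public float getProgress() throws IOException {
      return delegate.getProgress();
    }

    @Override
    public void close() throws IOException {
      delegate.close();
    }
  }
}
{code}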



[jira] [Commented] (MAPREDUCE-1176) Contribution: FixedLengthInputFormat and FixedLengthRecordReader

2013-09-09 Thread Mariappan Asokan (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762009#comment-13762009
 ] 

Mariappan Asokan commented on MAPREDUCE-1176:
-

TestUberAM times out for a different reason that is not related to this patch.  
See MAPREDUCE-5481.

-- Asokan

> Contribution: FixedLengthInputFormat and FixedLengthRecordReader
> 
>
> Key: MAPREDUCE-1176
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1176
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>Affects Versions: 2.1.0-beta, 2.0.5-alpha
> Environment: Any
>Reporter: BitsOfInfo
>Assignee: Mariappan Asokan
> Attachments: mapreduce-1176_v1.patch, MAPREDUCE-1176-v1.patch, 
> mapreduce-1176_v2.patch, MAPREDUCE-1176-v2.patch, MAPREDUCE-1176-v3.patch, 
> MAPREDUCE-1176-v4.patch
>
>
> Hello,
> I would like to contribute the following two classes for incorporation into 
> the mapreduce.lib.input package. These two classes can be used when you need 
> to read data from files containing fixed length (fixed width) records. Such 
> files have no CR/LF (or any combination thereof), no delimiters etc, but each 
> record is a fixed length, and extra data is padded with spaces. The data is 
> one gigantic line within a file.
> Provided are two classes: the first is FixedLengthInputFormat, with its 
> corresponding FixedLengthRecordReader. When creating a job that specifies 
> this input format, the job must have the 
> "mapreduce.input.fixedlengthinputformat.record.length" property set as follows:
> myJobConf.setInt("mapreduce.input.fixedlengthinputformat.record.length",[myFixedRecordLength]);
> OR
> myJobConf.setInt(FixedLengthInputFormat.FIXED_RECORD_LENGTH, 
> [myFixedRecordLength]);
> This input format overrides computeSplitSize() in order to ensure that 
> InputSplits do not contain any partial records since with fixed records there 
> is no way to determine where a record begins if that were to occur. Each 
> InputSplit passed to the FixedLengthRecordReader will start at the beginning 
> of a record, and the last byte in the InputSplit will be the last byte of a 
> record. The override of computeSplitSize() delegates to FileInputFormat's 
> compute method, and then adjusts the returned split size by doing the 
> following: (Math.floor(fileInputFormatsComputedSplitSize / fixedRecordLength) 
> * fixedRecordLength)
> This suite of fixed length input format classes does not support compressed 
> files.
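
A small sketch of the adjustment spelled out above (the helper name is 
illustrative; the formula is the one quoted in the description): the computed 
split size is rounded down to a whole number of records so no split ends 
mid-record.

{code}
/** Sketch of the split-size adjustment described above. */
class FixedLengthSplitMath {
  // Corresponds to (Math.floor(computedSplitSize / fixedRecordLength) * fixedRecordLength),
  // which long integer division already gives us.
  static long adjustSplitSize(long computedSplitSize, int fixedRecordLength) {
    if (fixedRecordLength <= 0) {
      throw new IllegalArgumentException("record length must be positive");
    }
    return (computedSplitSize / fixedRecordLength) * fixedRecordLength;
  }
}
{code}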



[jira] [Commented] (MAPREDUCE-5481) TestUberAM timeout

2013-09-09 Thread Mariappan Asokan (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762004#comment-13762004
 ] 

Mariappan Asokan commented on MAPREDUCE-5481:
-

I noticed a similar timeout on my system.  Although I am not familiar with the 
code related to this problem, here is what I noticed by looking at the test 
logs.  The Resource Manager comes up listening on a random free port on 
localhost, such as localhost:49170.  When a client (from the test) connects 
through RMProxy, it tries to resolve localhost through the name server (not 
the local hosts file).  This causes the connection to the Resource Manager to 
fail, and after a certain number of retries the test times out.
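
A quick way to check which addresses a JVM actually resolves {{localhost}} to 
(a diagnostic sketch, independent of the test itself):

{code}
import java.net.InetAddress;

/** Diagnostic sketch: print how this JVM resolves "localhost". */
class LocalhostResolutionCheck {
  public static void main(String[] args) throws Exception {
    for (InetAddress addr : InetAddress.getAllByName("localhost")) {
      System.out.println("localhost -> " + addr.getHostAddress());
    }
  }
}
{code}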

> TestUberAM timeout
> --
>
> Key: MAPREDUCE-5481
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5481
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2, test
>Affects Versions: 3.0.0
>Reporter: Jason Lowe
>
> TestUberAM has been timing out on trunk for some time now and surefire then 
> fails the build.  I'm not able to reproduce it locally, but the Jenkins 
> builds have been seeing it fairly consistently.  See 
> https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1529/console



[jira] [Commented] (MAPREDUCE-5498) maven Junit dependency should be test only

2013-09-09 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762077#comment-13762077
 ] 

Chris Nauroth commented on MAPREDUCE-5498:
--

Would it make sense to put {{test}} into the 
{{}} stanza of hadoop-project/pom.xml, so that all 
sub-modules are covered automatically, including any new sub-modules we might 
add in the future?

> maven Junit dependency should be test only
> --
>
> Key: MAPREDUCE-5498
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5498
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0, 2.1.0-beta
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-9935-001.patch
>
>
> The Maven dependencies for the YARN artifacts don't restrict JUnit to test 
> scope, so it gets picked up by all downstream users.



[jira] [Commented] (MAPREDUCE-5414) TestTaskAttempt fails jdk7 with NullPointerException

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761868#comment-13761868
 ] 

Hudson commented on MAPREDUCE-5414:
---

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1543 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1543/])
MAPREDUCE-5414. TestTaskAttempt fails in JDK7 with NPE. Contributed by Nemon 
Lou. (devaraj: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1520964)
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttempt.java


> TestTaskAttempt fails jdk7 with NullPointerException
> 
>
> Key: MAPREDUCE-5414
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5414
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.5-alpha
>Reporter: Nemon Lou
>Assignee: Nemon Lou
>  Labels: java7
> Fix For: 2.1.1-beta
>
> Attachments: MAPREDUCE-5414.patch, MAPREDUCE-5414.patch
>
>
> Test case org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt fails 
> once in a while when I run all of the tests together.
> {code:xml} 
> Running org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt
> Tests run: 9, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 7.893 sec <<< 
> FAILURE!
> Results :
> Tests in error:
>   
> testLaunchFailedWhileKilling(org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt)
>   
> testContainerCleanedWhileRunning(org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt)
>   
> testContainerCleanedWhileCommitting(org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt)
>   
> testDoubleTooManyFetchFailure(org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt)
> Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
> {code}
> But if I run a single test case, taking testContainerCleanedWhileRunning for 
> example, it will fail without doubt.
> {code:xml} 
> <testcase classname="org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt" 
> name="testContainerCleanedWhileRunning">
> <error type="java.lang.NullPointerException">java.lang.NullPointerException
> at org.apache.hadoop.security.token.Token.write(Token.java:216)
> at 
> org.apache.hadoop.mapred.ShuffleHandler.serializeServiceData(ShuffleHandler.java:205)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.createCommonContainerLaunchContext(TaskAttemptImpl.java:695)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.createContainerLaunchContext(TaskAttemptImpl.java:751)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$ContainerAssignedTransition.transition(TaskAttemptImpl.java:1309)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$ContainerAssignedTransition.transition(TaskAttemptImpl.java:1282)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:357)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:298)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:1009)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt.testContainerCleanedWhileRunning(TestTaskAttempt.java:410)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
> at org.junit.runners.ParentRunner$3.run(Pa

[jira] [Commented] (MAPREDUCE-5414) TestTaskAttempt fails jdk7 with NullPointerException

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761834#comment-13761834
 ] 

Hudson commented on MAPREDUCE-5414:
---

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1517 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1517/])
MAPREDUCE-5414. TestTaskAttempt fails in JDK7 with NPE. Contributed by Nemon 
Lou. (devaraj: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1520964)
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttempt.java


> TestTaskAttempt fails jdk7 with NullPointerException
> 
>
> Key: MAPREDUCE-5414
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5414
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.5-alpha
>Reporter: Nemon Lou
>Assignee: Nemon Lou
>  Labels: java7
> Fix For: 2.1.1-beta
>
> Attachments: MAPREDUCE-5414.patch, MAPREDUCE-5414.patch
>
>
> Test case org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt fails 
> once in a while when I run all of the tests together.
> {code:xml} 
> Running org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt
> Tests run: 9, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 7.893 sec <<< 
> FAILURE!
> Results :
> Tests in error:
>   
> testLaunchFailedWhileKilling(org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt)
>   
> testContainerCleanedWhileRunning(org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt)
>   
> testContainerCleanedWhileCommitting(org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt)
>   
> testDoubleTooManyFetchFailure(org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt)
> Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
> {code}
> But if I run a single test case, taking testContainerCleanedWhileRunning for 
> example, it will fail without doubt.
> {code:xml} 
> <testcase classname="org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt" 
> name="testContainerCleanedWhileRunning">
> <error type="java.lang.NullPointerException">java.lang.NullPointerException
> at org.apache.hadoop.security.token.Token.write(Token.java:216)
> at 
> org.apache.hadoop.mapred.ShuffleHandler.serializeServiceData(ShuffleHandler.java:205)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.createCommonContainerLaunchContext(TaskAttemptImpl.java:695)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.createContainerLaunchContext(TaskAttemptImpl.java:751)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$ContainerAssignedTransition.transition(TaskAttemptImpl.java:1309)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$ContainerAssignedTransition.transition(TaskAttemptImpl.java:1282)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:357)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:298)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:1009)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt.testContainerCleanedWhileRunning(TestTaskAttempt.java:410)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
> at org.junit.runners.ParentRunner$3.run(ParentRunner

[jira] [Commented] (MAPREDUCE-5414) TestTaskAttempt fails jdk7 with NullPointerException

2013-09-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761764#comment-13761764
 ] 

Hudson commented on MAPREDUCE-5414:
---

SUCCESS: Integrated in Hadoop-Yarn-trunk #327 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/327/])
MAPREDUCE-5414. TestTaskAttempt fails in JDK7 with NPE. Contributed by Nemon 
Lou. (devaraj: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1520964)
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskAttempt.java


> TestTaskAttempt fails jdk7 with NullPointerException
> 
>
> Key: MAPREDUCE-5414
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5414
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.5-alpha
>Reporter: Nemon Lou
>Assignee: Nemon Lou
>  Labels: java7
> Fix For: 2.1.1-beta
>
> Attachments: MAPREDUCE-5414.patch, MAPREDUCE-5414.patch
>
>
> Test case org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt fails 
> once in a while when I run all of the tests together.
> {code:xml} 
> Running org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt
> Tests run: 9, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 7.893 sec <<< 
> FAILURE!
> Results :
> Tests in error:
>   
> testLaunchFailedWhileKilling(org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt)
>   
> testContainerCleanedWhileRunning(org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt)
>   
> testContainerCleanedWhileCommitting(org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt)
>   
> testDoubleTooManyFetchFailure(org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt)
> Tests run: 9, Failures: 0, Errors: 4, Skipped: 0
> {code}
> But if I run a single test case, taking testContainerCleanedWhileRunning for 
> example, it will fail without doubt.
> {code:xml} 
> <testcase classname="org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt" 
> name="testContainerCleanedWhileRunning">
> <error type="java.lang.NullPointerException">java.lang.NullPointerException
> at org.apache.hadoop.security.token.Token.write(Token.java:216)
> at 
> org.apache.hadoop.mapred.ShuffleHandler.serializeServiceData(ShuffleHandler.java:205)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.createCommonContainerLaunchContext(TaskAttemptImpl.java:695)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.createContainerLaunchContext(TaskAttemptImpl.java:751)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$ContainerAssignedTransition.transition(TaskAttemptImpl.java:1309)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$ContainerAssignedTransition.transition(TaskAttemptImpl.java:1282)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:357)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:298)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:1009)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt.testContainerCleanedWhileRunning(TestTaskAttempt.java:410)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.j