[jira] [Updated] (MAPREDUCE-3217) ant test TestAuditLogger fails on trunk

2011-11-02 Thread Devaraj K (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated MAPREDUCE-3217:
-

Status: Patch Available  (was: Open)

> ant test TestAuditLogger fails on trunk
> ---
>
> Key: MAPREDUCE-3217
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3217
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Hitesh Shah
>Assignee: Devaraj K
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: MAPREDUCE-3217.patch
>
>
> Testcase: testKeyValLogFormat took 0.096 sec
> Testcase: testAuditLoggerWithoutIP took 0.005 sec
> Testcase: testAuditLoggerWithIP took 0.417 sec
> Caused an ERROR
> java.io.IOException: Unknown protocol: 
> org.apache.hadoop.ipc.TestRPC$TestProtocol
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:615)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1511)
> java.io.IOException: java.io.IOException: Unknown protocol: 
> org.apache.hadoop.ipc.TestRPC$TestProtocol
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:615)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1511)
> at org.apache.hadoop.ipc.Client.call(Client.java:1085)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:244)
> at $Proxy6.ping(Unknown Source)
> at 
> org.apache.hadoop.mapred.TestAuditLogger.testAuditLoggerWithIP(TestAuditLogger.java:150)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3217) ant test TestAuditLogger fails on trunk

2011-11-02 Thread Devaraj K (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated MAPREDUCE-3217:
-

Attachment: MAPREDUCE-3217.patch

> ant test TestAuditLogger fails on trunk
> ---
>
> Key: MAPREDUCE-3217
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3217
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Hitesh Shah
>Assignee: Devaraj K
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: MAPREDUCE-3217.patch
>
>
> Testcase: testKeyValLogFormat took 0.096 sec
> Testcase: testAuditLoggerWithoutIP took 0.005 sec
> Testcase: testAuditLoggerWithIP took 0.417 sec
> Caused an ERROR
> java.io.IOException: Unknown protocol: 
> org.apache.hadoop.ipc.TestRPC$TestProtocol
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:615)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1511)
> java.io.IOException: java.io.IOException: Unknown protocol: 
> org.apache.hadoop.ipc.TestRPC$TestProtocol
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:615)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1511)
> at org.apache.hadoop.ipc.Client.call(Client.java:1085)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:244)
> at $Proxy6.ping(Unknown Source)
> at 
> org.apache.hadoop.mapred.TestAuditLogger.testAuditLoggerWithIP(TestAuditLogger.java:150)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3221) ant test TestSubmitJob failing on trunk

2011-11-02 Thread Devaraj K (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated MAPREDUCE-3221:
-

Status: Patch Available  (was: Open)

> ant test TestSubmitJob failing on trunk
> ---
>
> Key: MAPREDUCE-3221
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3221
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Hitesh Shah
>Assignee: Devaraj K
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: MAPREDUCE-3221.patch
>
>
> Testcase: testJobWithInvalidMemoryReqs took 2.588 sec
> Testcase: testSecureJobExecution took 4.089 sec
> FAILED
> java.io.IOException: org.apache.hadoop.ipc.RPC$VersionMismatch: Protocol 
> org.apache.hadoop.hdfs.protocol.ClientProtocol version mismatch. (client = 
> 69, server = 70)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:617)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1511)
> junit.framework.AssertionFailedError: java.io.IOException: 
> org.apache.hadoop.ipc.RPC$VersionMismatch: Protocol 
> org.apache.hadoop.hdfs.protocol.ClientProtocol version mismatch. (client = 
> 69, server = 70)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:617)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1511)
> at 
> org.apache.hadoop.mapred.TestSubmitJob.testSecureJobExecution(TestSubmitJob.java:270)
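
For context, the RPC$VersionMismatch above comes from the versioned-IPC convention of this era: each protocol interface carries a versionID constant, and the client's value must match the server's. The sketch below is illustrative only and uses a hypothetical MyProtocol rather than the actual ClientProtocol definition:

{code}
// Illustrative sketch, not taken from the Hadoop source. A client compiled
// against versionID 69 calling a server whose interface declares 70 is
// rejected with RPC$VersionMismatch, as in the trace above.
public interface MyProtocol extends org.apache.hadoop.ipc.VersionedProtocol {
  // Bumped whenever the protocol's method signatures change.
  long versionID = 70L;

  void ping() throws java.io.IOException;
}
{code}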

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3221) ant test TestSubmitJob failing on trunk

2011-11-02 Thread Devaraj K (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated MAPREDUCE-3221:
-

Attachment: MAPREDUCE-3221.patch

> ant test TestSubmitJob failing on trunk
> ---
>
> Key: MAPREDUCE-3221
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3221
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Hitesh Shah
>Assignee: Devaraj K
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: MAPREDUCE-3221.patch
>
>
> Testcase: testJobWithInvalidMemoryReqs took 2.588 sec
> Testcase: testSecureJobExecution took 4.089 sec
> FAILED
> java.io.IOException: org.apache.hadoop.ipc.RPC$VersionMismatch: Protocol 
> org.apache.hadoop.hdfs.protocol.ClientProtocol version mismatch. (client = 
> 69, server = 70)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:617)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1511)
> junit.framework.AssertionFailedError: java.io.IOException: 
> org.apache.hadoop.ipc.RPC$VersionMismatch: Protocol 
> org.apache.hadoop.hdfs.protocol.ClientProtocol version mismatch. (client = 
> 69, server = 70)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:617)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1511)
> at 
> org.apache.hadoop.mapred.TestSubmitJob.testSecureJobExecution(TestSubmitJob.java:270)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-3337) Missing license headers for some files

2011-11-02 Thread Arun C Murthy (Created) (JIRA)
Missing license headers for some files
--

 Key: MAPREDUCE-3337
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3337
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Arun C Murthy
Assignee: Arun C Murthy
Priority: Blocker
 Fix For: 0.23.0


Missing Apache license headers for some files

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3102) NodeManager should fail fast with wrong configuration or permissions for LinuxContainerExecutor

2011-11-02 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142768#comment-13142768
 ] 

Hadoop QA commented on MAPREDUCE-3102:
--

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12502080/MR-3102.1.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 4 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1245//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1245//console

This message is automatically generated.

> NodeManager should fail fast with wrong configuration or permissions for 
> LinuxContainerExecutor
> ---
>
> Key: MAPREDUCE-3102
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3102
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: security
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Hitesh Shah
> Fix For: 0.23.1
>
> Attachments: MR-3102.1.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3102) NodeManager should fail fast with wrong configuration or permissions for LinuxContainerExecutor

2011-11-02 Thread Hitesh Shah (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated MAPREDUCE-3102:
---

Status: Patch Available  (was: Open)

Verified manually on a Linux cluster that the NM does not come up with a bad conf 
file (tried a missing group setting).

> NodeManager should fail fast with wrong configuration or permissions for 
> LinuxContainerExecutor
> ---
>
> Key: MAPREDUCE-3102
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3102
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: security
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Hitesh Shah
> Fix For: 0.23.1
>
> Attachments: MR-3102.1.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3102) NodeManager should fail fast with wrong configuration or permissions for LinuxContainerExecutor

2011-11-02 Thread Hitesh Shah (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated MAPREDUCE-3102:
---

Attachment: MR-3102.1.patch

> NodeManager should fail fast with wrong configuration or permissions for 
> LinuxContainerExecutor
> ---
>
> Key: MAPREDUCE-3102
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3102
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: security
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Hitesh Shah
> Fix For: 0.23.1
>
> Attachments: MR-3102.1.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3291) App fail to launch due to delegation token not found in cache

2011-11-02 Thread David Capwell (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Capwell updated MAPREDUCE-3291:
-

Attachment: hadoop.err

Here is the same error but it was found in the NM logs.

> App fail to launch due to delegation token not found in cache
> -
>
> Key: MAPREDUCE-3291
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3291
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Ramya Sunil
>Priority: Critical
> Fix For: 0.23.1
>
> Attachments: Log-MAPREDUCE-3291.rtf, hadoop.err
>
>
> In secure mode, saw an app failure due to 
> "org.apache.hadoop.security.token.SecretManager$InvalidToken: token 
> (HDFS_DELEGATION_TOKEN token  for ) can't be found in cache" 
> Exception in the next comment.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3311) Bump jetty to 6.1.26

2011-11-02 Thread Konstantin Shvachko (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142730#comment-13142730
 ] 

Konstantin Shvachko commented on MAPREDUCE-3311:


TestMiniMRChildTask passed for me. Actually all MR tests passed. But 
test-contrib is failing with
{code}
[ivy:resolve]   ::
[ivy:resolve]   ::  UNRESOLVED DEPENDENCIES ::
[ivy:resolve]   ::
[ivy:resolve]   :: org.mortbay.jetty#jsp-api-2.1;6.1.26: not found
[ivy:resolve]   :: org.mortbay.jetty#jsp-2.1;6.1.26: not found
[ivy:resolve]   ::
{code}


> Bump jetty to 6.1.26
> 
>
> Key: MAPREDUCE-3311
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3311
> Project: Hadoop Map/Reduce
>  Issue Type: Task
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: MAPREDUCE-3311.patch
>
>
> MapReduce part of HADOOP-7450

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3139) SlivePartitioner generates negative partitions

2011-11-02 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142727#comment-13142727
 ] 

Hudson commented on MAPREDUCE-3139:
---

Integrated in Hadoop-Mapreduce-22-branch #87 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-22-branch/87/])
MAPREDUCE-3139. Merge from trunk to 0.22.

jghoman : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196782
Files : 
* /hadoop/common/branches/branch-0.22/mapreduce/CHANGES.txt
* 
/hadoop/common/branches/branch-0.22/mapreduce/src/test/mapred/org/apache/hadoop/fs/slive/SlivePartitioner.java


> SlivePartitioner generates negative partitions
> --
>
> Key: MAPREDUCE-3139
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3139
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
>Assignee: Jakob Homan
> Fix For: 0.20.206.0, 0.22.0, 0.24.0
>
> Attachments: MR-3139-0.patch
>
>
> {{SlivePartitioner.getPartition()}} returns negative partition numbers on 
> some occasions, which is illegal.
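
As a general illustration (not necessarily the approach taken in MR-3139-0.patch), the usual way to keep a hash-based partitioner's output non-negative is to mask the sign bit before taking the modulus:

{code}
// Illustrative sketch only -- a hypothetical partitioner, not the real
// SlivePartitioner fix. hashCode() may be negative, so the sign bit is
// masked to keep the result in [0, numPartitions).
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class NonNegativeHashPartitioner extends Partitioner<Text, Text> {
  @Override
  public int getPartition(Text key, Text value, int numPartitions) {
    return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
  }
}
{code}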

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3336) com.google.inject.internal.Preconditions not public api - shouldn't be using it

2011-11-02 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142697#comment-13142697
 ] 

Hadoop QA commented on MAPREDUCE-3336:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12502044/MAPREDUCE-3336.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1244//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1244//console

This message is automatically generated.

> com.google.inject.internal.Preconditions not public api - shouldn't be using 
> it
> ---
>
> Key: MAPREDUCE-3336
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3336
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>Priority: Critical
> Attachments: MAPREDUCE-3336.patch
>
>
> com.google.inject.internal.Preconditions does not exist in Guice 3.0, and in 
> Guice 2.0 it was an internal API that shouldn't have been used. We should 
> use com.google.common.base.Preconditions instead.
> This is currently being used in 
> hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-1100) User's task-logs filling up local disks on the TaskTrackers

2011-11-02 Thread Milind Bhandarkar (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-1100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142664#comment-13142664
 ] 

Milind Bhandarkar commented on MAPREDUCE-1100:
--

I took a look at this and other JIRAs to make this work with Hadoop 0.22. I 
think that, given the multiple dependencies, it would require a lot of 
changes to be pulled in from the 0.20.2xx branch, and is not worth the risk. I 
know of at least a handful of production deployments that have circumvented 
this with a simple cron job that watches the log dirs and does cleanups outside 
of the framework, so I do not consider this to be a blocker.

Konstantin, it's your call now.

> User's task-logs filling up local disks on the TaskTrackers
> ---
>
> Key: MAPREDUCE-1100
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1100
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: tasktracker
>Affects Versions: 0.20.1, 0.20.2, 0.21.0
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
> Fix For: 0.21.1, 0.22.0
>
> Attachments: MAPREDUCE-1100-20091102.txt, 
> MAPREDUCE-1100-20091106.txt, MAPREDUCE-1100-20091216.2.txt, 
> patch-1100-fix-ydist.2.txt, reducetask-log-level.patch
>
>
> Some users' jobs are filling up TT disks with outrageous logging. 
> mapreduce.task.userlog.limit.kb is not enabled on the cluster. Disks are 
> filling up before task-log cleanup via 
> mapred.task.userlog.retain.hours can kick in.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3324) Not All HttpServer tools links (stacks,logs,config,metrics) are accessible through all UI servers

2011-11-02 Thread Jonathan Eagles (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142617#comment-13142617
 ] 

Jonathan Eagles commented on MAPREDUCE-3324:


I manually tested all 5 links that were added.

> Not All HttpServer tools links (stacks,logs,config,metrics) are accessible 
> through all UI servers
> -
>
> Key: MAPREDUCE-3324
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3324
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver, mrv2, nodemanager
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Critical
> Attachments: MAPREDUCE-3324.patch
>
>
> Nodemanager has no tools listed under tools UI.
> Jobhistory server has no logs tool listed under tools UI.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3336) com.google.inject.internal.Preconditions not public api - shouldn't be using it

2011-11-02 Thread Thomas Graves (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated MAPREDUCE-3336:
-

Status: Patch Available  (was: Open)

> com.google.inject.internal.Preconditions not public api - shouldn't be using 
> it
> ---
>
> Key: MAPREDUCE-3336
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3336
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>Priority: Critical
> Attachments: MAPREDUCE-3336.patch
>
>
> com.google.inject.internal.Preconditions does not exist in Guice 3.0, and in 
> Guice 2.0 it was an internal API that shouldn't have been used. We should 
> use com.google.common.base.Preconditions instead.
> This is currently being used in 
> hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3336) com.google.inject.internal.Preconditions not public api - shouldn't be using it

2011-11-02 Thread Thomas Graves (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated MAPREDUCE-3336:
-

Attachment: MAPREDUCE-3336.patch

> com.google.inject.internal.Preconditions not public api - shouldn't be using 
> it
> ---
>
> Key: MAPREDUCE-3336
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3336
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>Priority: Critical
> Attachments: MAPREDUCE-3336.patch
>
>
> com.google.inject.internal.Preconditions does not exist in Guice 3.0, and in 
> Guice 2.0 it was an internal API that shouldn't have been used. We should 
> use com.google.common.base.Preconditions instead.
> This is currently being used in 
> hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (MAPREDUCE-3336) com.google.inject.internal.Preconditions not public api - shouldn't be using it

2011-11-02 Thread Thomas Graves (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves reassigned MAPREDUCE-3336:


Assignee: Thomas Graves

> com.google.inject.internal.Preconditions not public api - shouldn't be using 
> it
> ---
>
> Key: MAPREDUCE-3336
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3336
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
>Priority: Critical
> Attachments: MAPREDUCE-3336.patch
>
>
> com.google.inject.internal.Preconditions does not exist in Guice 3.0, and in 
> Guice 2.0 it was an internal API that shouldn't have been used. We should 
> use com.google.common.base.Preconditions instead.
> This is currently being used in 
> hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-3336) com.google.inject.internal.Preconditions not public api - shouldn't be using it

2011-11-02 Thread Thomas Graves (Created) (JIRA)
com.google.inject.internal.Preconditions not public api - shouldn't be using it
---

 Key: MAPREDUCE-3336
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3336
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: mrv2
Affects Versions: 0.23.0
Reporter: Thomas Graves
Priority: Critical
 Attachments: MAPREDUCE-3336.patch

com.google.inject.internal.Preconditions does not exist in Guice 3.0, and in 
Guice 2.0 it was an internal API that shouldn't have been used. We should 
use com.google.common.base.Preconditions instead.

This is currently being used in 
hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java.
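
As a small, self-contained illustration of the replacement the description suggests (a sketch, not the contents of MAPREDUCE-3336.patch), the public Guava class offers equivalent precondition checks:

{code}
// Illustrative only: uses com.google.common.base.Preconditions, the public
// Guava API, instead of the Guice-internal class. The class and method names
// here are hypothetical.
import com.google.common.base.Preconditions;

public class MonitorConfigCheck {
  public static long checkInterval(long intervalMs) {
    // Throws IllegalArgumentException with the formatted message on failure.
    Preconditions.checkArgument(intervalMs > 0,
        "monitoring interval must be positive, got %s", intervalMs);
    return intervalMs;
  }
}
{code}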



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3219) ant test TestDelegationToken failing on trunk

2011-11-02 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142567#comment-13142567
 ] 

Hadoop QA commented on MAPREDUCE-3219:
--

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12502035/MR-3219.1.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1243//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1243//console

This message is automatically generated.

> ant test TestDelegationToken failing on trunk
> -
>
> Key: MAPREDUCE-3219
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3219
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Hitesh Shah
>Assignee: Hitesh Shah
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: MR-3219.1.patch
>
>
> Testcase: testDelegationToken took 2.043 sec
> Caused an ERROR
> Client Hitesh tries to renew a token with renewer specified as alice
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.renewToken(AbstractDelegationTokenSecretManager.java:239)
> at 
> org.apache.hadoop.mapred.JobTracker.renewDelegationToken(JobTracker.java:4829)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:632)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1511)
> org.apache.hadoop.security.AccessControlException: Client Hitesh tries to 
> renew a token with renewer specified as alice
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.renewToken(AbstractDelegationTokenSecretManager.java:239)
> at 
> org.apache.hadoop.mapred.JobTracker.renewDelegationToken(JobTracker.java:4829)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:632)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1511)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> at 
> org.apache.hadoop.mapreduce.Cluster.renewDelegationToken(Cluster.java:399)
> at 
> org.apache.hadoop.mapred.JobClient$Renewer.renew(JobClient.java:475)
> at org.apache.hadoop.security.token.Token.renew(Token.java:310)
> at 
> org.apache.hadoop.mapred.JobClient.renewDelegationToken(JobClient.java:1088)
> at 
> org.apache.hadoop.mapreduce.security.token.delegation.TestDelegationToken.testDelegationToken(TestDelegationToken.java:89)
> Caused by: org.apache.hadoop.security.AccessControlException: Client Hitesh 
> tries to renew a token with renewer specified as alice
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretM

[jira] [Commented] (MAPREDUCE-3303) MR part of removing RecordIO (HADOOP-7781)

2011-11-02 Thread Klaas Bosteels (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142546#comment-13142546
 ] 

Klaas Bosteels commented on MAPREDUCE-3303:
---

Not sure I agree with the removal of RecordIO, but that's a different 
discussion.. :)

The TypedBytes functionality in Streaming doesn't really have a hard 
dependency on RecordIO; it just supports it. RecordIO records are automatically 
converted to typed bytes when they are taken as input, so that streaming 
programs can easily read sequence files that contain them, but removing the 
RecordIO support will not break typed-bytes-based Streaming altogether.

I guess we should add support for Avro (instead) though if that's going to be 
the new standard, but that could be a separate JIRA issue.

> MR part of removing RecordIO (HADOOP-7781)
> --
>
> Key: MAPREDUCE-3303
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3303
> Project: Hadoop Map/Reduce
>  Issue Type: Task
>Affects Versions: 0.23.0
>Reporter: Harsh J
>Assignee: Harsh J
>Priority: Minor
> Fix For: 0.24.0
>
>
> This is the MR part of removing deprecated RecordIO packages - parented by 
> HADOOP-7781.
> Basically, we need to remove 
> {{/hadoop-mapreduce-project/src/c++/librecordio}} and all associated build 
> helpers around it.
> (For posterity, RecordIO has been replaced by Apache Avro 
> http://avro.apache.org)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3139) SlivePartitioner generates negative partitions

2011-11-02 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142543#comment-13142543
 ] 

Hudson commented on MAPREDUCE-3139:
---

Integrated in Hadoop-Common-0.23-Commit #137 (See 
[https://builds.apache.org/job/Hadoop-Common-0.23-Commit/137/])
MAPREDUCE-3139. Merge from trunk to 0.23.

jghoman : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196783
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs/slive/SlivePartitioner.java


> SlivePartitioner generates negative partitions
> --
>
> Key: MAPREDUCE-3139
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3139
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
>Assignee: Jakob Homan
> Fix For: 0.20.206.0, 0.22.0, 0.24.0
>
> Attachments: MR-3139-0.patch
>
>
> {{SlivePartitioner.getPartition()}} returns negative partition numbers on 
> some occasions, which is illegal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3139) SlivePartitioner generates negative partitions

2011-11-02 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142538#comment-13142538
 ] 

Hudson commented on MAPREDUCE-3139:
---

Integrated in Hadoop-Hdfs-0.23-Commit #138 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/138/])
MAPREDUCE-3139. Merge from trunk to 0.23.

jghoman : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196783
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs/slive/SlivePartitioner.java


> SlivePartitioner generates negative partitions
> --
>
> Key: MAPREDUCE-3139
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3139
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
>Assignee: Jakob Homan
> Fix For: 0.20.206.0, 0.22.0, 0.24.0
>
> Attachments: MR-3139-0.patch
>
>
> {{SlivePartitioner.getPartition()}} returns negative partition numbers on 
> some occasions, which is illegal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3139) SlivePartitioner generates negative partitions

2011-11-02 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142533#comment-13142533
 ] 

Hudson commented on MAPREDUCE-3139:
---

Integrated in Hadoop-Common-trunk-Commit #1236 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1236/])
MAPREDUCE-3139. SlivePartitioner generates negative partitions.

jghoman : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196776
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs/slive/SlivePartitioner.java


> SlivePartitioner generates negative partitions
> --
>
> Key: MAPREDUCE-3139
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3139
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
>Assignee: Jakob Homan
> Fix For: 0.20.206.0, 0.22.0, 0.24.0
>
> Attachments: MR-3139-0.patch
>
>
> {{SlivePartitioner.getPartition()}} returns negative partition numbers on 
> some occasions, which is illegal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3139) SlivePartitioner generates negative partitions

2011-11-02 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142525#comment-13142525
 ] 

Hudson commented on MAPREDUCE-3139:
---

Integrated in Hadoop-Hdfs-trunk-Commit #1311 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1311/])
MAPREDUCE-3139. SlivePartitioner generates negative partitions.

jghoman : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196776
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs/slive/SlivePartitioner.java


> SlivePartitioner generates negative partitions
> --
>
> Key: MAPREDUCE-3139
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3139
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
>Assignee: Jakob Homan
> Fix For: 0.20.206.0, 0.22.0, 0.24.0
>
> Attachments: MR-3139-0.patch
>
>
> {{SlivePartitioner.getPartition()}} returns negative partition numbers on 
> some occasions, which is illegal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3219) ant test TestDelegationToken failing on trunk

2011-11-02 Thread Hitesh Shah (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated MAPREDUCE-3219:
---

Attachment: MR-3219.1.patch

> ant test TestDelegationToken failing on trunk
> -
>
> Key: MAPREDUCE-3219
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3219
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Hitesh Shah
>Assignee: Hitesh Shah
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: MR-3219.1.patch
>
>
> Testcase: testDelegationToken took 2.043 sec
> Caused an ERROR
> Client Hitesh tries to renew a token with renewer specified as alice
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.renewToken(AbstractDelegationTokenSecretManager.java:239)
> at 
> org.apache.hadoop.mapred.JobTracker.renewDelegationToken(JobTracker.java:4829)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:632)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1511)
> org.apache.hadoop.security.AccessControlException: Client Hitesh tries to 
> renew a token with renewer specified as alice
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.renewToken(AbstractDelegationTokenSecretManager.java:239)
> at 
> org.apache.hadoop.mapred.JobTracker.renewDelegationToken(JobTracker.java:4829)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:632)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1511)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> at 
> org.apache.hadoop.mapreduce.Cluster.renewDelegationToken(Cluster.java:399)
> at 
> org.apache.hadoop.mapred.JobClient$Renewer.renew(JobClient.java:475)
> at org.apache.hadoop.security.token.Token.renew(Token.java:310)
> at 
> org.apache.hadoop.mapred.JobClient.renewDelegationToken(JobClient.java:1088)
> at 
> org.apache.hadoop.mapreduce.security.token.delegation.TestDelegationToken.testDelegationToken(TestDelegationToken.java:89)
> Caused by: org.apache.hadoop.security.AccessControlException: Client Hitesh 
> tries to renew a token with renewer specified as alice
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.renewToken(AbstractDelegationTokenSecretManager.java:239)
> at 
> org.apache.hadoop.mapred.JobTracker.renewDelegationToken(JobTracker.java:4829)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:632)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1511)
> at org.apache.hadoop.ipc.Client.call(Client.java:1085)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:244)
> at $Proxy11.renewDelegationToken(Unknown Source)
> at 
> org.apache.hadoop.mapreduce.Cluster.renewDelegationToken(Cluster.java:397)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Updated] (MAPREDUCE-3219) ant test TestDelegationToken failing on trunk

2011-11-02 Thread Hitesh Shah (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated MAPREDUCE-3219:
---

Status: Patch Available  (was: Open)

> ant test TestDelegationToken failing on trunk
> -
>
> Key: MAPREDUCE-3219
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3219
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Hitesh Shah
>Assignee: Hitesh Shah
>Priority: Minor
> Fix For: 0.23.1
>
> Attachments: MR-3219.1.patch
>
>
> Testcase: testDelegationToken took 2.043 sec
> Caused an ERROR
> Client Hitesh tries to renew a token with renewer specified as alice
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.renewToken(AbstractDelegationTokenSecretManager.java:239)
> at 
> org.apache.hadoop.mapred.JobTracker.renewDelegationToken(JobTracker.java:4829)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:632)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1511)
> org.apache.hadoop.security.AccessControlException: Client Hitesh tries to 
> renew a token with renewer specified as alice
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.renewToken(AbstractDelegationTokenSecretManager.java:239)
> at 
> org.apache.hadoop.mapred.JobTracker.renewDelegationToken(JobTracker.java:4829)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:632)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1511)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> at 
> org.apache.hadoop.mapreduce.Cluster.renewDelegationToken(Cluster.java:399)
> at 
> org.apache.hadoop.mapred.JobClient$Renewer.renew(JobClient.java:475)
> at org.apache.hadoop.security.token.Token.renew(Token.java:310)
> at 
> org.apache.hadoop.mapred.JobClient.renewDelegationToken(JobClient.java:1088)
> at 
> org.apache.hadoop.mapreduce.security.token.delegation.TestDelegationToken.testDelegationToken(TestDelegationToken.java:89)
> Caused by: org.apache.hadoop.security.AccessControlException: Client Hitesh 
> tries to renew a token with renewer specified as alice
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.renewToken(AbstractDelegationTokenSecretManager.java:239)
> at 
> org.apache.hadoop.mapred.JobTracker.renewDelegationToken(JobTracker.java:4829)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:632)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1511)
> at org.apache.hadoop.ipc.Client.call(Client.java:1085)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:244)
> at $Proxy11.renewDelegationToken(Unknown Source)
> at 
> org.apache.hadoop.mapreduce.Cluster.renewDelegationToken(Cluster.java:397)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Assigned] (MAPREDUCE-3219) ant test TestDelegationToken failing on trunk

2011-11-02 Thread Hitesh Shah (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah reassigned MAPREDUCE-3219:
--

Assignee: Hitesh Shah

> ant test TestDelegationToken failing on trunk
> -
>
> Key: MAPREDUCE-3219
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3219
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Hitesh Shah
>Assignee: Hitesh Shah
>Priority: Minor
> Fix For: 0.23.1
>
>
> Testcase: testDelegationToken took 2.043 sec
> Caused an ERROR
> Client Hitesh tries to renew a token with renewer specified as alice
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.renewToken(AbstractDelegationTokenSecretManager.java:239)
> at 
> org.apache.hadoop.mapred.JobTracker.renewDelegationToken(JobTracker.java:4829)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:632)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1511)
> org.apache.hadoop.security.AccessControlException: Client Hitesh tries to 
> renew a token with renewer specified as alice
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.renewToken(AbstractDelegationTokenSecretManager.java:239)
> at 
> org.apache.hadoop.mapred.JobTracker.renewDelegationToken(JobTracker.java:4829)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:632)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1511)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> at 
> org.apache.hadoop.mapreduce.Cluster.renewDelegationToken(Cluster.java:399)
> at 
> org.apache.hadoop.mapred.JobClient$Renewer.renew(JobClient.java:475)
> at org.apache.hadoop.security.token.Token.renew(Token.java:310)
> at 
> org.apache.hadoop.mapred.JobClient.renewDelegationToken(JobClient.java:1088)
> at 
> org.apache.hadoop.mapreduce.security.token.delegation.TestDelegationToken.testDelegationToken(TestDelegationToken.java:89)
> Caused by: org.apache.hadoop.security.AccessControlException: Client Hitesh 
> tries to renew a token with renewer specified as alice
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.renewToken(AbstractDelegationTokenSecretManager.java:239)
> at 
> org.apache.hadoop.mapred.JobTracker.renewDelegationToken(JobTracker.java:4829)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:632)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1517)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1152)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1511)
> at org.apache.hadoop.ipc.Client.call(Client.java:1085)
> at 
> org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:244)
> at $Proxy11.renewDelegationToken(Unknown Source)
> at 
> org.apache.hadoop.mapreduce.Cluster.renewDelegationToken(Cluster.java:397)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Commented] (MAPREDUCE-3286) Unit tests for MAPREDUCE-3186 - User jobs are getting hanged if the Resource manager process goes down and comes up while job is getting executed.

2011-11-02 Thread Konstantin Shvachko (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142502#comment-13142502
 ] 

Konstantin Shvachko commented on MAPREDUCE-3286:


This is an example. They have been hanging out there since September 15.
{code}
jenkins  32743  0.0  0.8 1803048 144544 ?  Sl   Oct20   3:38 
/home/jenkins/tools/java/latest/bin/java 
-Dlog4j.configuration=container-log4j.properties 
-Dyarn.app.mapreduce.container.log.dir=/home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/org.apache.hadoop.mapreduce.v2.TestMRJobs/org.apache.hadoop.mapreduce.v2.TestMRJobs-logDir/application_1319147575527_0003/container_1319147575527_0003_01_01 
-Dyarn.app.mapreduce.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
-Xmx1536m  org.apache.hadoop.mapreduce.v2.app.MRAppMaster
{code}
I'll be killing them manually now.

> Unit tests for MAPREDUCE-3186 - User jobs are getting hanged if the Resource 
> manager process goes down and comes up while job is getting executed.
> --
>
> Key: MAPREDUCE-3286
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3286
> Project: Hadoop Map/Reduce
>  Issue Type: Test
>  Components: mrv2
>Affects Versions: 0.23.0
> Environment: linux
>Reporter: Eric Payne
>Assignee: Eric Payne
>  Labels: test
>
> If the resource manager is restarted while job execution is in progress, 
> the job hangs.
> The UI shows the job as running.
> In the RM log, it throws an error "ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: 
> AppAttemptId doesnt exist in cache appattempt_1318579738195_0004_01"
> In the console, the MRAppMaster and Runjar processes do not get killed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3286) Unit tests for MAPREDUCE-3186 - User jobs are getting hanged if the Resource manager process goes down and comes up while job is getting executed.

2011-11-02 Thread Konstantin Shvachko (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142495#comment-13142495
 ] 

Konstantin Shvachko commented on MAPREDUCE-3286:


I see many (more than 70) of those hanging on hadoop7 and breaking other builds.

> Unit tests for MAPREDUCE-3186 - User jobs are getting hanged if the Resource 
> manager process goes down and comes up while job is getting executed.
> --
>
> Key: MAPREDUCE-3286
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3286
> Project: Hadoop Map/Reduce
>  Issue Type: Test
>  Components: mrv2
>Affects Versions: 0.23.0
> Environment: linux
>Reporter: Eric Payne
>Assignee: Eric Payne
>  Labels: test
>
> If the resource manager is restarted while job execution is in progress, 
> the job hangs.
> The UI shows the job as running.
> In the RM log, it throws an error "ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService: 
> AppAttemptId doesnt exist in cache appattempt_1318579738195_0004_01"
> In the console, the MRAppMaster and Runjar processes do not get killed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3139) SlivePartitioner generates negative partitions

2011-11-02 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142487#comment-13142487
 ] 

Hudson commented on MAPREDUCE-3139:
---

Integrated in Hadoop-Mapreduce-0.23-Commit #148 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/148/])
MAPREDUCE-3139. Merge from trunk to 0.23.

jghoman : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196783
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs/slive/SlivePartitioner.java


> SlivePartitioner generates negative partitions
> --
>
> Key: MAPREDUCE-3139
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3139
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
>Assignee: Jakob Homan
> Fix For: 0.20.206.0, 0.22.0, 0.24.0
>
> Attachments: MR-3139-0.patch
>
>
> {{SlivePartitioner.getPartition()}} returns negative partition numbers on 
> some occasions, which is illegal.
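For context, a partitioner's getPartition() must return a value in [0, numPartitions), but a plain hashCode() % numPartitions goes negative whenever the hash is negative. A minimal illustrative sketch of the usual guard (assumed key/value types; this is not the committed MR-3139-0.patch):
{code}
// Illustrative only -- mask the sign bit so the modulus is never negative.
public int getPartition(Text key, Text value, int numPartitions) {
  return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
}
{code}
Note that Math.abs() is not a safe alternative here, since Math.abs(Integer.MIN_VALUE) is still negative.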

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3139) SlivePartitioner generates negative partitions

2011-11-02 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142469#comment-13142469
 ] 

Hudson commented on MAPREDUCE-3139:
---

Integrated in Hadoop-Mapreduce-trunk-Commit #1258 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1258/])
MAPREDUCE-3139. SlivePartitioner generates negative partitions.

jghoman : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196776
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs/slive/SlivePartitioner.java


> SlivePartitioner generates negative partitions
> --
>
> Key: MAPREDUCE-3139
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3139
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
>Assignee: Jakob Homan
> Fix For: 0.20.206.0, 0.22.0, 0.24.0
>
> Attachments: MR-3139-0.patch
>
>
> {{SlivePartitioner.getPartition()}} returns negative partition numbers on 
> some occasions, which is illegal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3324) Not All HttpServer tools links (stacks,logs,config,metrics) are accessible through all UI servers

2011-11-02 Thread jirapos...@reviews.apache.org (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142467#comment-13142467
 ] 

jirapos...@reviews.apache.org commented on MAPREDUCE-3324:
--


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/2665/#review3018
---

Ship it!


I assume that you brought it up and the links work.  If so then it looks good 
to me.

- Robert


On 2011-11-01 21:15:14, Jonathan Eagles wrote:
bq.  
bq.  ---
bq.  This is an automatically generated e-mail. To reply, visit:
bq.  https://reviews.apache.org/r/2665/
bq.  ---
bq.  
bq.  (Updated 2011-11-01 21:15:14)
bq.  
bq.  
bq.  Review request for Tom Graves, Robert Evans and Mark Holderbaugh.
bq.  
bq.  
bq.  Summary
bq.  ---
bq.  
bq.  Nodemanager has no tools listed under tools UI.
bq.  Jobhistory server has no logs tool listed under tools UI.
bq.  
bq.  
bq.  This addresses bug MAPREDUCE-3324.
bq.  http://issues.apache.org/jira/browse/MAPREDUCE-3324
bq.  
bq.  
bq.  Diffs
bq.  -
bq.  
bq.
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsNavBlock.java
 8d3ccff 
bq.
hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/NavBlock.java
 01ea4aa 
bq.  
bq.  Diff: https://reviews.apache.org/r/2665/diff
bq.  
bq.  
bq.  Testing
bq.  ---
bq.  
bq.  Manually verified that Tools navigation bar on NM and Job History contain 
stacks,logs,config,metrics
bq.  
bq.  
bq.  Thanks,
bq.  
bq.  Jonathan
bq.  
bq.



> Not All HttpServer tools links (stacks,logs,config,metrics) are accessible 
> through all UI servers
> -
>
> Key: MAPREDUCE-3324
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3324
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver, mrv2, nodemanager
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Critical
> Attachments: MAPREDUCE-3324.patch
>
>
> Nodemanager has no tools listed under tools UI.
> Jobhistory server has no logs tool listed under tools UI.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (MAPREDUCE-3102) NodeManager should fail fast with wrong configuration or permissions for LinuxContainerExecutor

2011-11-02 Thread Hitesh Shah (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah reassigned MAPREDUCE-3102:
--

Assignee: Hitesh Shah

> NodeManager should fail fast with wrong configuration or permissions for 
> LinuxContainerExecutor
> ---
>
> Key: MAPREDUCE-3102
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3102
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: security
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Hitesh Shah
> Fix For: 0.23.1
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3139) SlivePartitioner generates negative partitions

2011-11-02 Thread Jakob Homan (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated MAPREDUCE-3139:
---

   Resolution: Fixed
Fix Version/s: 0.22.0
   0.20.206.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this to 22-trunk.  Resolving.

> SlivePartitioner generates negative partitions
> --
>
> Key: MAPREDUCE-3139
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3139
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Shvachko
>Assignee: Jakob Homan
> Fix For: 0.20.206.0, 0.22.0, 0.24.0
>
> Attachments: MR-3139-0.patch
>
>
> {{SlivePartitioner.getPartition()}} returns negative partition numbers on 
> some occasions, which is illegal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3320) Error conditions in web apps should stop pages from rendering.

2011-11-02 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated MAPREDUCE-3320:
---

Priority: Major  (was: Critical)
Target Version/s: 0.24.0, 0.23.1  (was: 0.23.1, 0.24.0)

> Error conditions in web apps should stop pages from rendering.
> --
>
> Key: MAPREDUCE-3320
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3320
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Fix For: 0.24.0, 0.23.1
>
>
> There are several places in the web apps where an error condition should 
> short circuit the page from rendering, but it does not.  Ideally the web app 
> framework should be extended to support exceptions similar to Jersey that can 
> have an HTTP return code associated with them.  Then all of the places that 
> produce custom error pages can just throw these exceptions instead. 
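To make the idea concrete, here is a minimal sketch of the kind of status-carrying exception the description envisions (class and method names are hypothetical, not the actual webapp framework API):
{code}
// Hypothetical sketch only; names are illustrative.
public class WebAppException extends RuntimeException {
  private final int httpStatus;   // HTTP status to return, e.g. 404

  public WebAppException(int httpStatus, String message) {
    super(message);
    this.httpStatus = httpStatus;
  }

  public int getHttpStatus() {
    return httpStatus;
  }
}

// A page block could then short-circuit rendering with
//   throw new WebAppException(404, "Unknown job " + jobId);
// and one central handler would map the status onto an error page.
{code}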

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-3334) TaskRunner should log its activities

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-3334.
-

Resolution: Won't Fix

> TaskRunner should log its activities
> 
>
> Key: MAPREDUCE-3334
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3334
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Affects Versions: 0.20.2
>Reporter: Allen Wittenauer
>Priority: Minor
>
> TaskRunner has little to no information that it logs, making it impossible to 
> debug when something goes wrong.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3215) org.apache.hadoop.mapreduce.TestNoJobSetupCleanup failing on trunk

2011-11-02 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142415#comment-13142415
 ] 

Hadoop QA commented on MAPREDUCE-3215:
--

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12501993/MR-3215.1.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1242//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1242//console

This message is automatically generated.

> org.apache.hadoop.mapreduce.TestNoJobSetupCleanup failing on trunk
> --
>
> Key: MAPREDUCE-3215
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3215
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Hitesh Shah
>Assignee: Hitesh Shah
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: MR-3215.1.patch
>
>
> Testcase: testNoJobSetupCleanup took 13.271 sec
> FAILED
> Number of part-files is 0 and not 1
> junit.framework.AssertionFailedError: Number of part-files is 0 and not 1
> at 
> org.apache.hadoop.mapreduce.TestNoJobSetupCleanup.submitAndValidateJob(TestNoJobSetupCleanup.java:60)
> at 
> org.apache.hadoop.mapreduce.TestNoJobSetupCleanup.testNoJobSetupCleanup(TestNoJobSetupCleanup.java:70)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3215) org.apache.hadoop.mapreduce.TestNoJobSetupCleanup failing on trunk

2011-11-02 Thread Hitesh Shah (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated MAPREDUCE-3215:
---

Status: Patch Available  (was: Open)

> org.apache.hadoop.mapreduce.TestNoJobSetupCleanup failing on trunk
> --
>
> Key: MAPREDUCE-3215
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3215
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Hitesh Shah
>Assignee: Hitesh Shah
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: MR-3215.1.patch
>
>
> Testcase: testNoJobSetupCleanup took 13.271 sec
> FAILED
> Number of part-files is 0 and not 1
> junit.framework.AssertionFailedError: Number of part-files is 0 and not 1
> at 
> org.apache.hadoop.mapreduce.TestNoJobSetupCleanup.submitAndValidateJob(TestNoJobSetupCleanup.java:60)
> at 
> org.apache.hadoop.mapreduce.TestNoJobSetupCleanup.testNoJobSetupCleanup(TestNoJobSetupCleanup.java:70)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3215) org.apache.hadoop.mapreduce.TestNoJobSetupCleanup failing on trunk

2011-11-02 Thread Hitesh Shah (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated MAPREDUCE-3215:
---

Attachment: MR-3215.1.patch

> org.apache.hadoop.mapreduce.TestNoJobSetupCleanup failing on trunk
> --
>
> Key: MAPREDUCE-3215
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3215
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Hitesh Shah
>Assignee: Hitesh Shah
>Priority: Minor
> Fix For: 0.24.0
>
> Attachments: MR-3215.1.patch
>
>
> Testcase: testNoJobSetupCleanup took 13.271 sec
> FAILED
> Number of part-files is 0 and not 1
> junit.framework.AssertionFailedError: Number of part-files is 0 and not 1
> at 
> org.apache.hadoop.mapreduce.TestNoJobSetupCleanup.submitAndValidateJob(TestNoJobSetupCleanup.java:60)
> at 
> org.apache.hadoop.mapreduce.TestNoJobSetupCleanup.testNoJobSetupCleanup(TestNoJobSetupCleanup.java:70)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-991) HadoopPipes.cc doesn't compile cleanly with SunStudio

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-991.


Resolution: Won't Fix

Hadoop is not focused on portability.

> HadoopPipes.cc doesn't compile cleanly with SunStudio
> -
>
> Key: MAPREDUCE-991
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-991
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>
> Attempting to compile HadoopPipes.cc throws the following warnings and errors:
> {noformat}
>  [exec] 
> "/export/home/awittena/src/hadoop-0.20.0/src/c++/pipes/impl/HadoopPipes.cc", 
> line 663: Warning: protocol hides HadoopPipes::TaskContextImpl::protocol.
>  [exec] 
> "/export/home/awittena/src/hadoop-0.20.0/src/c++/pipes/impl/HadoopPipes.cc", 
> line 844: Warning: status hides HadoopPipes::TaskContextImpl::status.
>  [exec] 
> "/export/home/awittena/src/hadoop-0.20.0/src/c++/pipes/impl/HadoopPipes.cc", 
> line 871: Warning: key hides HadoopPipes::TaskContextImpl::key.
>  [exec] 
> "/export/home/awittena/src/hadoop-0.20.0/src/c++/pipes/impl/HadoopPipes.cc", 
> line 871: Warning: value hides HadoopPipes::TaskContextImpl::value.
>  [exec] 
> "/export/home/awittena/src/hadoop-0.20.0/src/c++/pipes/impl/HadoopPipes.cc", 
> line 943: Error: The function "sleep" must have a prototype.
>  [exec] 
> "/export/home/awittena/src/hadoop-0.20.0/src/c++/pipes/impl/HadoopPipes.cc", 
> line 961: Error: The function "close" must have a prototype.
>  [exec] 
> "/export/home/awittena/src/hadoop-0.20.0/src/c++/pipes/impl/HadoopPipes.cc", 
> line 1037: Warning (Anachronism): Formal argument 3 of type extern "C" 
> void*(*)(void*) in call to pthread_create(unsigned*, const _pthread_attr*, 
> extern "C" void*(*)(void*), void*) is being passed void*(*)(void*).
>  [exec] 
> "/export/home/awittena/src/hadoop-0.20.0/src/c++/pipes/impl/HadoopPipes.cc", 
> line 1057: Error: The function "close" must have a prototype.
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1058) analysehistory.jsp should report node where task ran

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-1058.
-

Resolution: Won't Fix

> analysehistory.jsp should report node where task ran
> 
>
> Key: MAPREDUCE-1058
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1058
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Allen Wittenauer
>
> It is kind of painful to determine which nodes particular tasks ran on.  It would 
> be useful to list this in the web UI, especially for the best/worst 
> performing tasks.   Using that information, it might be easier to see nodes 
> that are over/under performing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1162) Job history should keep track of which task trackers were blacklisted

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-1162.
-

Resolution: Won't Fix

> Job history should keep track of which task trackers were blacklisted
> -
>
> Key: MAPREDUCE-1162
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1162
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Allen Wittenauer
>Priority: Trivial
>
> It would be useful to have job history keep track of which nodes were 
> blacklisted by the job.  This would be used to build a history of job failure 
> on certain nodes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1166) SerialUtils.cc: dynamic allocation of arrays based on runtime variable is not portable

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-1166.
-

Resolution: Won't Fix

Hadoop's focus is not on portability.  Closing as won't fix.

> SerialUtils.cc: dynamic allocation of arrays based on runtime variable is not 
> portable
> --
>
> Key: MAPREDUCE-1166
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1166
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: MAPREREDUCE-1166.patch
>
>
> In SerialUtils.cc, the following code appears:
> int len;
> if (b < -120) {
>   negative = true;
>   len = -120 - b;
> } else {
>   negative = false;
>   len = -112 - b;
> }
> uint8_t barr[len];
> as far as I'm aware, this is not legal in ANSI C and will be rejected by ANSI 
> compliant compilers.  Instead, this should be malloc()'d based upon the size 
> of len and free()'d later.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-3335) rat check seems to be broken

2011-11-02 Thread Arun C Murthy (Created) (JIRA)
rat check seems to be broken


 Key: MAPREDUCE-3335
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3335
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.0
Reporter: Arun C Murthy
 Fix For: 0.23.1


The rat check seems broken; we don't get warned for files without license 
headers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1184) mapred.reduce.slowstart.completed.maps is too low by default

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-1184.
-

Resolution: Won't Fix

> mapred.reduce.slowstart.completed.maps is too low by default
> 
>
> Key: MAPREDUCE-1184
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1184
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Affects Versions: 0.20.1, 0.20.2
>Reporter: Allen Wittenauer
>
> By default, this value is set to 5%.  I believe for most real world 
> situations the code isn't efficient enough to be set this low.  This should 
> be higher, probably around the 50% mark, especially given the predominance of 
> non-FIFO schedulers.
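For anyone who wants to experiment before any default changes, the knob is a fraction of completed maps and can be overridden per cluster or per job; a sketch assuming the standard *-site.xml override mechanism:
{code}
<!-- Example only: start reducers after 50% of maps have completed
     (the shipped default corresponds to 0.05, i.e. 5%). -->
<property>
  <name>mapred.reduce.slowstart.completed.maps</name>
  <value>0.50</value>
</property>
{code}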

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1180) Detailed area chart of map/reduce slots usage

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-1180.
-

Resolution: Won't Fix

This won't be a part of Hadoop.

> Detailed area chart of map/reduce slots usage
> -
>
> Key: MAPREDUCE-1180
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1180
> Project: Hadoop Map/Reduce
>  Issue Type: Wish
>Reporter: Allen Wittenauer
>Priority: Minor
> Attachments: samplechart.png
>
>
> People are always looking for ideas of things to implement... so here's one. 
> :)
> I'd like an app that I can throw at a JobHistory directory that would show me 
> detailed slot usage by job, user, pool, etc, in an area stacked chart format. 
>  This would be very helpful to determine if a particular job, user, or pool 
> is under/over utilizing the capacity, if we need more capacity, what time 
> slots have holes, etc.  I'll see if I can create an example in Excel of what 
> I'm thinking of.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1187) mradmin -refreshNodes should be implemented

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-1187.
-

Resolution: Won't Fix

> mradmin -refreshNodes should be implemented
> ---
>
> Key: MAPREDUCE-1187
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1187
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>
> dfsadmin -refreshNodes re-reads the include/exclude files for the HDFS, 
> triggers decommisions, etc.  The MapReduce framework should have similar 
> functionality using the same parameter to mradmin.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1237) Job with no maps or reduces creates graph with XML parsing error

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-1237.
-

Resolution: Won't Fix

> Job with no maps or reduces creates graph with XML parsing error
> 
>
> Key: MAPREDUCE-1237
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1237
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 0.20.1, 0.20.2
>Reporter: Allen Wittenauer
>
> For some reason, a job that had zero maps and zero reduces got submitted.  
> When looking at the details of this job in the jobtracker ui, the map 
> completion graph was an XML error rather than something more meaningful.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1206) Need a better answer for "how many reduces?"

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-1206.
-

Resolution: Won't Fix

> Need a better answer for "how many reduces?"
> 
>
> Key: MAPREDUCE-1206
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1206
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>
> http://hadoop.apache.org/common/docs/current/mapred_tutorial.html#Reducer
> --snip--
> How Many Reduces?
> The right number of reduces seems to be 0.95 or 1.75 multiplied by (<no. of nodes> * mapred.tasktracker.reduce.tasks.maximum).
> --snip--
> Sure, if you only ever run one job on your grid.  There should really be a 
> better answer here, especially explaining what the impact is of a high/low 
> number when chaining multiple map/reduce jobs.
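To make the quoted rule of thumb concrete: on a 100-node cluster with mapred.tasktracker.reduce.tasks.maximum=2 there are 200 reduce slots, so the formula suggests about 0.95 * 200 = 190 reduces (a single wave) or 1.75 * 200 = 350 reduces (faster nodes pick up a second wave). Both numbers assume the whole grid is available to one job, which is exactly why the guidance breaks down on a shared cluster running multiple jobs.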

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1391) mapred.jobtracker.restart.recover should be true by default

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-1391.
-

Resolution: Won't Fix

Job recovery is broken, so this will likely never be the default until that is 
fixed.

> mapred.jobtracker.restart.recover should be true by default
> ---
>
> Key: MAPREDUCE-1391
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1391
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>
> I haven't played with it much (about to), but is there a reason why jt 
> recover is false by default?  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1552) TaskTracker should report which fs during error

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-1552.
-

Resolution: Won't Fix

> TaskTracker should report which fs during error
> ---
>
> Key: MAPREDUCE-1552
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1552
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Affects Versions: 0.20.2
>Reporter: Allen Wittenauer
>
> We run with ZFS with fs quotas for the mapred spill space to prevent it 
> over-running the HDFS space.  During merge, we sometimes end up running out 
> of space.  It would be useful if the stack trace (see below) included which 
> file system the errors actually came from.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1450) task logs should specify user vs. system death

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-1450.
-

Resolution: Won't Fix

> task logs should specify user vs. system death
> --
>
> Key: MAPREDUCE-1450
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1450
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>Reporter: Allen Wittenauer
>
> When looking at task attempt logs, it should specify whether the task was 
> killed by Hadoop or by the user.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1665) kill and modify should not be the same acl

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-1665.
-

Resolution: Won't Fix

We don't believe in granular control.

> kill and modify should not be the same acl
> --
>
> Key: MAPREDUCE-1665
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1665
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.21.0, 0.22.0
>Reporter: Allen Wittenauer
>
> The permission to kill a job/task should be split out from modification.  
> There are definitely instances where someone who can kill a job should not be 
> able to modify it.  [Third person job monitoring, for example, such as we 
> have here at LinkedIn.]  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1679) capacity scheduler's user-limit documentation is not helpful

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-1679.
-

Resolution: Won't Fix

No one looks at Hadoop's documentation so this won't get fixed.

> capacity scheduler's user-limit documentation is not helpful
> 
>
> Key: MAPREDUCE-1679
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1679
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: contrib/capacity-sched
>Affects Versions: 0.20.2, 0.21.0, 0.22.0
>Reporter: Allen Wittenauer
>Priority: Trivial
>
> The example given for the user limit tunable doesn't actually show how that 
> value comes into play.  With 4 users, the Max() is 25 for both the user limit 
> and the capacity limit (from my reading of the source).  Either pushing the 
> example to 5 users or raising the user limit to something higher than 25 
> would help a great deal.  Also, presenting this info in tabular format 
> showing how the max() value is in play would also be great.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1755) Zombie tasks kept alive by logging system

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-1755.
-

Resolution: Won't Fix

It is easier to just pkill -9 java tasks at regular intervals than fix this.

> Zombie tasks kept alive by logging system
> -
>
> Key: MAPREDUCE-1755
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1755
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 0.20.2
>Reporter: Allen Wittenauer
> Attachments: jstack.txt, stderr.txt, syslog.txt, tightloop.txt
>
>
> I'm currently looking at a task that, as far as the task tracker is 
> concerned, is dead.  Like long long long ago dead.  It was a failed task that 
> ran out of heap.  Rather than just kill it, I thought I would see what it was 
> doing, since it was clearly using system resources.  It would appear the 
> system is trying to log but failing.  I'm guessing we're missing an error 
> condition and not doing the appropriate thing. See the comments for more.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1815) Directory in logs/history causes ArrayIndexOutOfBoundsException

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-1815.
-

Resolution: Won't Fix

> Directory in logs/history causes ArrayIndexOutOfBoundsException
> ---
>
> Key: MAPREDUCE-1815
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1815
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobtracker
>Affects Versions: 0.20.2
>Reporter: Allen Wittenauer
>
> Creating a directory in the jobtracker history directory causes an 
> ArrayIndexOutOfBounds exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1869) TaskTracker hangs due to Java's usage of fork() being MT-unsafe

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-1869.
-

Resolution: Won't Fix

> TaskTracker hangs due to Java's usage of fork() being MT-unsafe
> ---
>
> Key: MAPREDUCE-1869
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1869
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: tasktracker
>Affects Versions: 0.20.2
> Environment: Solaris 10 Update 7
> Java 1.6.0_14
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: jstack.txt, pstack.txt
>
>
> A TaskTracker process on our grid appears to be locked up and not sending 
> heartbeats to the JobTracker.  Attaching jstack and pstack output.  Even though 
> the hangs appear to be in LocalDirAllocator, the local file system seems to 
> be a-ok.  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1847) capacity scheduler job tasks summaries are wrong if nodes fail

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-1847.
-

Resolution: Won't Fix

> capacity scheduler job tasks summaries are wrong if nodes fail
> --
>
> Key: MAPREDUCE-1847
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1847
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: contrib/capacity-sched
>Reporter: Allen Wittenauer
>Priority: Minor
>
> The Job Scheduling Information in the web UI needs to be re-computed in case 
> nodes fail.  Otherwise it will report tasks as running that are not.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1884) Remove/deprecate mapred.map.tasks tunable

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-1884.
-

Resolution: Won't Fix

> Remove/deprecate mapred.map.tasks tunable
> -
>
> Key: MAPREDUCE-1884
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1884
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>
> Considering it isn't used for much that I know, is there any reason to keep 
> the mapred.map.tasks tunable hanging around?  If not, let's remove it from 
> the documentation, xml files, etc.  All it does is generate user confusion 
> when it doesn't work.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-1998) Size-based queuing for capacity scheduler

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-1998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-1998.
-

Resolution: Won't Fix

> Size-based queuing for capacity scheduler
> -
>
> Key: MAPREDUCE-1998
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1998
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: contrib/capacity-sched
>Reporter: Allen Wittenauer
>Assignee: Krishna Ramachandran
>
> On job submission, it would be useful if the capacity scheduler could pick a 
> queue based on the # of maps and reduces.  This way one could have queues 
> based on job-size without users having to pick the queue prior to submission. 
>  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-2441) regression: maximum limit of -1 + user-lmit math appears to be off

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-2441.
-

Resolution: Won't Fix

> regression: maximum limit of -1 + user-lmit math appears to be off
> --
>
> Key: MAPREDUCE-2441
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2441
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: contrib/capacity-sched
>Affects Versions: 0.20.203.0
>Reporter: Allen Wittenauer
>Priority: Critical
> Attachments: capsched.xml
>
>
> The math around the slot usage when maximum-capacity=-1 appears to be faulty. 
>  See comments.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-2028) streaming should support MultiFileInputFormat

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-2028.
-

Resolution: Won't Fix

core devs don't use streaming so this won't get fixed.

> streaming should support MultiFileInputFormat
> -
>
> Key: MAPREDUCE-2028
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2028
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: contrib/streaming
>Affects Versions: 0.20.2
>Reporter: Allen Wittenauer
>
> There should be a way to call MultiFileInputFormat from streaming without 
> having to write Java code...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-2017) Move jobs between queues post-job submit

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-2017.
-

Resolution: Won't Fix

> Move jobs between queues post-job submit
> 
>
> Key: MAPREDUCE-2017
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2017
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: contrib/capacity-sched
>Affects Versions: 0.21.0
>Reporter: Allen Wittenauer
>
> It would be massively useful to be able to move a job between queues after it 
> has already been submitted.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-2507) vaidya script uses the wrong path for hadoop-core due to jar renaming

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-2507.
-

Resolution: Won't Fix

No one uses this so who cares.

> vaidya script uses the wrong path for hadoop-core due to jar renaming
> -
>
> Key: MAPREDUCE-2507
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2507
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: contrib/vaidya
>Affects Versions: 0.20.203.0
>Reporter: Allen Wittenauer
>Priority: Trivial
>
> Another fallout of the incompatible jar renaming.  I sure hope Maven was 
> worth it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-2508) vaidya script uses the wrong path for vaidya jar due to jar renaming

2011-11-02 Thread Allen Wittenauer (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved MAPREDUCE-2508.
-

Resolution: Won't Fix

No one uses this so who cares.  Closing.

> vaidya script uses the wrong path for vaidya jar due to jar renaming
> 
>
> Key: MAPREDUCE-2508
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2508
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: contrib/vaidya
>Reporter: Allen Wittenauer
>Priority: Trivial
>
> This clearly wasn't tested in 203.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-3334) TaskRunner should log its activities

2011-11-02 Thread Allen Wittenauer (Created) (JIRA)
TaskRunner should log its activities


 Key: MAPREDUCE-3334
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3334
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Affects Versions: 0.20.2
Reporter: Allen Wittenauer
Priority: Minor


TaskRunner has little to no information that it logs, making it impossible to 
debug when something goes wrong.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3325) Improvements to CapacityScheduler doc

2011-11-02 Thread Thomas Graves (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142313#comment-13142313
 ] 

Thomas Graves commented on MAPREDUCE-3325:
--

documentation changes only - tests don't apply.

> Improvements to CapacityScheduler doc
> -
>
> Key: MAPREDUCE-3325
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3325
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
> Attachments: MAPREDUCE-3325.patch
>
>
> I noticed the following issues with the capacity scheduler doc: 
> ./hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/CapacityScheduler.apt.vm
> - In overview section, 3rd paragraph,  sentence "There is an added benefit 
> that an organization can access any excess capacity no being used by others". 
>  No should be not. 
> - in overview section, 4th paragraph. dispropotionate misspelled 
> - in features section, under multitenancy - monopolizing is misspelled. 
> - in features section, under operability - it doesn't say if you can delete 
> queues at runtime?  I see there is a note at the end but perhaps that can be 
> added into the other sections to since its easy to miss that Note at the very 
> end. 
> - in features section - hierarchy and Hierarchical mispelled. 
> - under configuration section the class to turn on to use capacity scheduler 
> should be: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
>  
> - section on setting up queues, 4th sentence - hierarchy misspelled as 
> heirarcy  and heirarchy. 
> - I think specifying how a user has to specify the queue when running a 
> job/app would be useful information.  Especially with the new hierarchical 
> queues.  Does the user have to specify the entire path like a.b.c or can they 
> just specify c. 
> - under "Running and Pending Application Limits" section, property 
> "yarn.scheduler.capacity.maximum-applications", they are referred to them as 
> jobs, I believe that should be applications. 
> - misspelled concurrently as concurently in same section of 
> maximum-applications. 
> - I think it should specify the defaults (if any) for the config vars.   Also 
> what format are they specified in - int, float,etc? 
> - might be nice to say it doesn't support preemption. 
> - under admin options yarn.scheduler.capacity.<queue-path>.state - queues 
> misspelled as queueus 
> - under changing queue configuration it should have "yarn" in front of the 
> "rmadmin -refreshQueues". Similarly a few lines down at 
> "$YARN_HOME/bin/rmadmin -refreshQueues"

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3325) Improvements to CapacityScheduler doc

2011-11-02 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142300#comment-13142300
 ] 

Hadoop QA commented on MAPREDUCE-3325:
--

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12501980/MAPREDUCE-3325.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1241//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1241//console

This message is automatically generated.

> Improvements to CapacityScheduler doc
> -
>
> Key: MAPREDUCE-3325
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3325
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
> Attachments: MAPREDUCE-3325.patch
>
>
> I noticed the following issues with the capacity scheduler doc: 
> ./hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/CapacityScheduler.apt.vm
> - In overview section, 3rd paragraph,  sentence "There is an added benefit 
> that an organization can access any excess capacity no being used by others". 
>  No should be not. 
> - in overview section, 4th paragraph. dispropotionate misspelled 
> - in features section, under multitenancy - monopolizing is misspelled. 
> - in features section, under operability - it doesn't say whether you can delete 
> queues at runtime.  I see there is a note at the end, but perhaps that can be 
> added into the other sections too, since it's easy to miss that Note at the very 
> end. 
> - in features section - hierarchy and Hierarchical misspelled. 
> - under configuration section the class to turn on to use capacity scheduler 
> should be: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
>  
> - section on setting up queues, 4th sentence - hierarchy misspelled as 
> heirarcy  and heirarchy. 
> - I think specifying how a user has to specify the queue when running a 
> job/app would be useful information, especially with the new hierarchical 
> queues.  Does the user have to specify the entire path like a.b.c, or can they 
> just specify c? 
> - under "Running and Pending Application Limits" section, property 
> "yarn.scheduler.capacity.maximum-applications", they are referred to as 
> jobs; I believe that should be applications. 
> - misspelled concurrently as concurently in same section of 
> maximum-applications. 
> - I think it should specify the defaults (if any) for the config vars.   Also 
> what format are they specified in - int, float,etc? 
> - might be nice to say it doesn't support preemption. 
> - under admin options yarn.scheduler.capacity.<queue-path>.state - queues 
> misspelled as queueus 
> - under changing queue configuration it should have "yarn" in front of the 
> "rmadmin -refreshQueues". Similarly a few lines down at 
> "$YARN_HOME/bin/rmadmin -refreshQueues"

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3325) Improvements to CapacityScheduler doc

2011-11-02 Thread Thomas Graves (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated MAPREDUCE-3325:
-

Release Note: document changes only.
  Status: Patch Available  (was: Open)

> Improvements to CapacityScheduler doc
> -
>
> Key: MAPREDUCE-3325
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3325
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
> Attachments: MAPREDUCE-3325.patch
>
>
> I noticed the following issues with the capacity scheduler doc: 
> ./hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/CapacityScheduler.apt.vm
> - In overview section, 3rd paragraph,  sentence "There is an added benefit 
> that an organization can access any excess capacity no being used by others". 
>  No should be not. 
> - in overview section, 4th paragraph. dispropotionate misspelled 
> - in features section, under multitenancy - monopolizing is misspelled. 
> - in features section, under operability - it doesn't say whether you can delete 
> queues at runtime.  I see there is a note at the end, but perhaps that can be 
> added into the other sections too, since it's easy to miss that Note at the very 
> end. 
> - in features section - hierarchy and Hierarchical misspelled. 
> - under configuration section the class to turn on to use capacity scheduler 
> should be: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
>  
> - section on setting up queues, 4th sentence - hierarchy misspelled as 
> heirarcy  and heirarchy. 
> - I think specifying how a user has to specify the queue when running a 
> job/app would be useful information, especially with the new hierarchical 
> queues.  Does the user have to specify the entire path like a.b.c, or can they 
> just specify c? 
> - under "Running and Pending Application Limits" section, property 
> "yarn.scheduler.capacity.maximum-applications", they are referred to as 
> jobs; I believe that should be applications. 
> - misspelled concurrently as concurently in same section of 
> maximum-applications. 
> - I think it should specify the defaults (if any) for the config vars.   Also 
> what format are they specified in - int, float,etc? 
> - might be nice to say it doesn't support preemption. 
> - under admin options yarn.scheduler.capacity.<queue-path>.state - queues 
> misspelled as queueus 
> - under changing queue configuration it should have "yarn" in front of the 
> "rmadmin -refreshQueues". Similarly a few lines down at 
> "$YARN_HOME/bin/rmadmin -refreshQueues"

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3325) Improvements to CapacityScheduler doc

2011-11-02 Thread Thomas Graves (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated MAPREDUCE-3325:
-

Attachment: MAPREDUCE-3325.patch

> Improvements to CapacityScheduler doc
> -
>
> Key: MAPREDUCE-3325
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3325
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Thomas Graves
>Assignee: Thomas Graves
> Attachments: MAPREDUCE-3325.patch
>
>
> I noticed the following issues with the capacity scheduler doc: 
> ./hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/CapacityScheduler.apt.vm
> - In overview section, 3rd paragraph,  sentence "There is an added benefit 
> that an organization can access any excess capacity no being used by others". 
>  No should be not. 
> - in overview section, 4th paragraph. dispropotionate misspelled 
> - in features section, under multitenancy - monopolizing is misspelled. 
> - in features section, under operability - it doesn't say whether you can delete 
> queues at runtime.  I see there is a note at the end, but perhaps that can be 
> added into the other sections too, since it's easy to miss that Note at the very 
> end. 
> - in features section - hierarchy and Hierarchical misspelled. 
> - under configuration section the class to turn on to use capacity scheduler 
> should be: 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
>  
> - section on setting up queues, 4th sentence - hierarchy misspelled as 
> heirarcy  and heirarchy. 
> - I think specifying how a user has to specify the queue when running a 
> job/app would be useful information, especially with the new hierarchical 
> queues.  Does the user have to specify the entire path like a.b.c, or can they 
> just specify c? 
> - under "Running and Pending Application Limits" section, property 
> "yarn.scheduler.capacity.maximum-applications", they are referred to as 
> jobs; I believe that should be applications. 
> - misspelled concurrently as concurently in same section of 
> maximum-applications. 
> - I think it should specify the defaults (if any) for the config vars.   Also 
> what format are they specified in - int, float,etc? 
> - might be nice to say it doesn't support preemption. 
> - under admin options yarn.scheduler.capacity.<queue-path>.state - queues 
> misspelled as queueus 
> - under changing queue configuration it should have "yarn" in front of the 
> "rmadmin -refreshQueues". Similarly a few lines down at 
> "$YARN_HOME/bin/rmadmin -refreshQueues"

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (MAPREDUCE-3327) RM web ui scheduler link doesn't show correct max value for queues

2011-11-02 Thread Anupam Seth (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anupam Seth reassigned MAPREDUCE-3327:
--

Assignee: Anupam Seth

> RM web ui scheduler link doesn't show correct max value for queues
> --
>
> Key: MAPREDUCE-3327
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3327
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Thomas Graves
>Assignee: Anupam Seth
>Priority: Critical
>
> Configure a cluster to use the capacity scheduler and then specifying a 
> maximum-capacity < 100% for a queue.  If you go to the RM Web UI and hover 
> over the queue, it always shows the max at 100%.
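
For anyone reproducing this, the per-queue property involved would look roughly like the 
following (the queue name "default" and the value are assumptions; maximum-capacity is the 
standard CapacityScheduler property the report refers to):

{code}
<!-- capacity-scheduler.xml: cap the queue's elastic growth below 100%. -->
<property>
  <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
  <value>50</value>
</property>
{code}

With such a setting, the scheduler page should report 50% as the queue's maximum, which is 
what the hover text fails to do per this report.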

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3333) MR AM for sort-job going out of memory

2011-11-02 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142234#comment-13142234
 ] 

Hadoop QA commented on MAPREDUCE-3333:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12501972/MAPREDUCE-3333-20111102.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1240//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1240//console

This message is automatically generated.

> MR AM for sort-job going out of memory
> --
>
> Key: MAPREDUCE-3333
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3333
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster, mrv2
>Affects Versions: 0.23.0
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
> Attachments: MAPREDUCE-3333-20111102.txt
>
>
> [~Karams] just found this. The usual sort job on a 350 node cluster hung due 
> to OutOfMemory and eventually failed after an hour instead of the usual odd 
> 20 minutes.
> {code}
> 2011-11-02 11:40:36,438 ERROR [ContainerLauncher #258] 
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Container 
> launch failed for container_1320233407485_0002
> _01_001434 : java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:88)
> at 
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:290)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> Caused by: com.google.protobuf.ServiceException: java.io.IOException: Failed 
> on local exception: java.io.IOException: Couldn't set up IO streams; Host 
> Details : local host is: "gsbl91281.blue.ygrid.yahoo.com/98.137.101.189"; 
> destination host is: ""gsbl91525.blue.ygrid.yahoo.com":45450; 
> at 
> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:139)
> at $Proxy20.startContainer(Unknown Source)
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:81)
> ... 4 more
> Caused by: java.io.IOException: Failed on local exception: 
> java.io.IOException: Couldn't set up IO streams; Host Details : local host 
> is: "gsbl91281.blue.ygrid.yahoo.com/98.137.101.189"; destination host is: 
> ""gsbl91525.blue.ygrid.yahoo.com":45450; 
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:655)
> at org.apache.hadoop.ipc.Client.call(Client.java:1089)
> at 
> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:136)
> ... 6 more
> Caused by: java.io.IOException: Couldn't set up IO streams
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:621)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:205)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1195)
> at org.apache.hadoop.ipc.Client.call(Client.java:1065)
> ... 7 more
> Caused by: java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:597)
> at 
> 

[jira] [Updated] (MAPREDUCE-3333) MR AM for sort-job going out of memory

2011-11-02 Thread Vinod Kumar Vavilapalli (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-3333:
---

Status: Patch Available  (was: Open)

> MR AM for sort-job going out of memory
> --
>
> Key: MAPREDUCE-3333
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3333
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster, mrv2
>Affects Versions: 0.23.0
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
>     Attachments: MAPREDUCE-3333-20111102.txt
>
>
> [~Karams] just found this. The usual sort job on a 350 node cluster hung due 
> to OutOfMemory and eventually failed after an hour instead of the usual odd 
> 20 minutes.
> {code}
> 2011-11-02 11:40:36,438 ERROR [ContainerLauncher #258] 
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Container 
> launch failed for container_1320233407485_0002
> _01_001434 : java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:88)
> at 
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:290)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> Caused by: com.google.protobuf.ServiceException: java.io.IOException: Failed 
> on local exception: java.io.IOException: Couldn't set up IO streams; Host 
> Details : local host is: "gsbl91281.blue.ygrid.yahoo.com/98.137.101.189"; 
> destination host is: ""gsbl91525.blue.ygrid.yahoo.com":45450; 
> at 
> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:139)
> at $Proxy20.startContainer(Unknown Source)
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:81)
> ... 4 more
> Caused by: java.io.IOException: Failed on local exception: 
> java.io.IOException: Couldn't set up IO streams; Host Details : local host 
> is: "gsbl91281.blue.ygrid.yahoo.com/98.137.101.189"; destination host is: 
> ""gsbl91525.blue.ygrid.yahoo.com":45450; 
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:655)
> at org.apache.hadoop.ipc.Client.call(Client.java:1089)
> at 
> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:136)
> ... 6 more
> Caused by: java.io.IOException: Couldn't set up IO streams
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:621)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:205)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1195)
> at org.apache.hadoop.ipc.Client.call(Client.java:1065)
> ... 7 more
> Caused by: java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:597)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:614)
> ... 10 more
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3333) MR AM for sort-job going out of memory

2011-11-02 Thread Vinod Kumar Vavilapalli (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-3333:
---

Attachment: MAPREDUCE-3333-20111102.txt

The exception trace gave it away. It is not the pool of threads, but the RPC 
layer itself. For each client, the RPC layer creates a thread for 
connection/communication etc. With MAPREDUCE-3256, we need one client per 
container because of the per-container token. So the number of RPC-level threads 
blows up, and you know the rest of the story.

Attaching patch. Taking Karam's help for testing.
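
To make the arithmetic concrete, here is a small, self-contained sketch (plain Java, not 
the actual ContainerLauncherImpl or RPC code; the node and container counts are assumptions) 
of how keying clients per container rather than per NM changes the number of connection 
threads the AM ends up holding:

{code}
// Illustration only: counts how many RPC clients (and hence IPC connection
// threads) the AM would hold if proxies are kept per NM versus per container.
import java.util.HashSet;
import java.util.Set;

public class ContainerLauncherThreadCount {
  public static void main(String[] args) {
    int nodes = 350;              // cluster size from the report
    int containersPerNode = 20;   // assumed; a large sort job runs many containers per node

    Set<String> perNode = new HashSet<String>();
    Set<String> perContainer = new HashSet<String>();

    for (int n = 0; n < nodes; n++) {
      String nm = "nm-" + n + ":45450";
      for (int c = 0; c < containersPerNode; c++) {
        perNode.add(nm);                          // one client per NM address
        perContainer.add(nm + "#container-" + c); // one client per container token
      }
    }

    // Roughly 350 connection threads versus several thousand, which is where
    // "java.lang.OutOfMemoryError: unable to create new native thread" comes from.
    System.out.println("clients keyed per NM:        " + perNode.size());
    System.out.println("clients keyed per container: " + perContainer.size());
  }
}
{code}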

> MR AM for sort-job going out of memory
> --
>
> Key: MAPREDUCE-3333
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3333
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster, mrv2
>Affects Versions: 0.23.0
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
> Attachments: MAPREDUCE-3333-20111102.txt
>
>
> [~Karams] just found this. The usual sort job on a 350 node cluster hung due 
> to OutOfMemory and eventually failed after an hour instead of the usual odd 
> 20 minutes.
> {code}
> 2011-11-02 11:40:36,438 ERROR [ContainerLauncher #258] 
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Container 
> launch failed for container_1320233407485_0002
> _01_001434 : java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:88)
> at 
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:290)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> Caused by: com.google.protobuf.ServiceException: java.io.IOException: Failed 
> on local exception: java.io.IOException: Couldn't set up IO streams; Host 
> Details : local host is: "gsbl91281.blue.ygrid.yahoo.com/98.137.101.189"; 
> destination host is: ""gsbl91525.blue.ygrid.yahoo.com":45450; 
> at 
> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:139)
> at $Proxy20.startContainer(Unknown Source)
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:81)
> ... 4 more
> Caused by: java.io.IOException: Failed on local exception: 
> java.io.IOException: Couldn't set up IO streams; Host Details : local host 
> is: "gsbl91281.blue.ygrid.yahoo.com/98.137.101.189"; destination host is: 
> ""gsbl91525.blue.ygrid.yahoo.com":45450; 
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:655)
> at org.apache.hadoop.ipc.Client.call(Client.java:1089)
> at 
> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:136)
> ... 6 more
> Caused by: java.io.IOException: Couldn't set up IO streams
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:621)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:205)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1195)
> at org.apache.hadoop.ipc.Client.call(Client.java:1065)
> ... 7 more
> Caused by: java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:597)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:614)
> ... 10 more
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3333) MR AM for sort-job going out of memory

2011-11-02 Thread Vinod Kumar Vavilapalli (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142170#comment-13142170
 ] 

Vinod Kumar Vavilapalli commented on MAPREDUCE-3333:


Actually, the code does look right: it creates only one thread per node. This is 
deeper than my first suspicion; still debugging.

> MR AM for sort-job going out of memory
> --
>
> Key: MAPREDUCE-3333
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3333
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster, mrv2
>Affects Versions: 0.23.0
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
>
> [~Karams] just found this. The usual sort job on a 350 node cluster hung due 
> to OutOfMemory and eventually failed after an hour instead of the usual odd 
> 20 minutes.
> {code}
> 2011-11-02 11:40:36,438 ERROR [ContainerLauncher #258] 
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Container 
> launch failed for container_1320233407485_0002
> _01_001434 : java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:88)
> at 
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:290)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> Caused by: com.google.protobuf.ServiceException: java.io.IOException: Failed 
> on local exception: java.io.IOException: Couldn't set up IO streams; Host 
> Details : local host is: "gsbl91281.blue.ygrid.yahoo.com/98.137.101.189"; 
> destination host is: ""gsbl91525.blue.ygrid.yahoo.com":45450; 
> at 
> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:139)
> at $Proxy20.startContainer(Unknown Source)
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:81)
> ... 4 more
> Caused by: java.io.IOException: Failed on local exception: 
> java.io.IOException: Couldn't set up IO streams; Host Details : local host 
> is: "gsbl91281.blue.ygrid.yahoo.com/98.137.101.189"; destination host is: 
> ""gsbl91525.blue.ygrid.yahoo.com":45450; 
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:655)
> at org.apache.hadoop.ipc.Client.call(Client.java:1089)
> at 
> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:136)
> ... 6 more
> Caused by: java.io.IOException: Couldn't set up IO streams
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:621)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:205)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1195)
> at org.apache.hadoop.ipc.Client.call(Client.java:1065)
> ... 7 more
> Caused by: java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:597)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:614)
> ... 10 more
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (MAPREDUCE-3333) MR AM for sort-job going out of memory

2011-11-02 Thread Vinod Kumar Vavilapalli (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli reassigned MAPREDUCE-3333:
--

Assignee: Vinod Kumar Vavilapalli

> MR AM for sort-job going out of memory
> --
>
> Key: MAPREDUCE-3333
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3333
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster, mrv2
>Affects Versions: 0.23.0
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>Priority: Blocker
>
> [~Karams] just found this. The usual sort job on a 350 node cluster hung due 
> to OutOfMemory and eventually failed after an hour instead of the usual odd 
> 20 minutes.
> {code}
> 2011-11-02 11:40:36,438 ERROR [ContainerLauncher #258] 
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Container 
> launch failed for container_1320233407485_0002
> _01_001434 : java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:88)
> at 
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:290)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> Caused by: com.google.protobuf.ServiceException: java.io.IOException: Failed 
> on local exception: java.io.IOException: Couldn't set up IO streams; Host 
> Details : local host is: "gsbl91281.blue.ygrid.yahoo.com/98.137.101.189"; 
> destination host is: ""gsbl91525.blue.ygrid.yahoo.com":45450; 
> at 
> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:139)
> at $Proxy20.startContainer(Unknown Source)
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:81)
> ... 4 more
> Caused by: java.io.IOException: Failed on local exception: 
> java.io.IOException: Couldn't set up IO streams; Host Details : local host 
> is: "gsbl91281.blue.ygrid.yahoo.com/98.137.101.189"; destination host is: 
> ""gsbl91525.blue.ygrid.yahoo.com":45450; 
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:655)
> at org.apache.hadoop.ipc.Client.call(Client.java:1089)
> at 
> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:136)
> ... 6 more
> Caused by: java.io.IOException: Couldn't set up IO streams
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:621)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:205)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1195)
> at org.apache.hadoop.ipc.Client.call(Client.java:1065)
> ... 7 more
> Caused by: java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:597)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:614)
> ... 10 more
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3333) MR AM for sort-job going out of memory

2011-11-02 Thread Vinod Kumar Vavilapalli (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142162#comment-13142162
 ] 

Vinod Kumar Vavilapalli commented on MAPREDUCE-3333:


It wasn't so hard to track this down, given that one of my earlier patches, 
MAPREDUCE-3256, causes this.

My mistake. The AM now tries to create one thread per container instead of the 
earlier, correct behaviour of one thread per node.

> MR AM for sort-job going out of memory
> --
>
> Key: MAPREDUCE-3333
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3333
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster, mrv2
>Affects Versions: 0.23.0
>Reporter: Vinod Kumar Vavilapalli
>Priority: Blocker
>
> [~Karams] just found this. The usual sort job on a 350 node cluster hung due 
> to OutOfMemory and eventually failed after an hour instead of the usual odd 
> 20 minutes.
> {code}
> 2011-11-02 11:40:36,438 ERROR [ContainerLauncher #258] 
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Container 
> launch failed for container_1320233407485_0002
> _01_001434 : java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:88)
> at 
> org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:290)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> Caused by: com.google.protobuf.ServiceException: java.io.IOException: Failed 
> on local exception: java.io.IOException: Couldn't set up IO streams; Host 
> Details : local host is: "gsbl91281.blue.ygrid.yahoo.com/98.137.101.189"; 
> destination host is: ""gsbl91525.blue.ygrid.yahoo.com":45450; 
> at 
> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:139)
> at $Proxy20.startContainer(Unknown Source)
> at 
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:81)
> ... 4 more
> Caused by: java.io.IOException: Failed on local exception: 
> java.io.IOException: Couldn't set up IO streams; Host Details : local host 
> is: "gsbl91281.blue.ygrid.yahoo.com/98.137.101.189"; destination host is: 
> ""gsbl91525.blue.ygrid.yahoo.com":45450; 
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:655)
> at org.apache.hadoop.ipc.Client.call(Client.java:1089)
> at 
> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:136)
> ... 6 more
> Caused by: java.io.IOException: Couldn't set up IO streams
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:621)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:205)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1195)
> at org.apache.hadoop.ipc.Client.call(Client.java:1065)
> ... 7 more
> Caused by: java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:597)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:614)
> ... 10 more
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (MAPREDUCE-3333) MR AM for sort-job going out of memory

2011-11-02 Thread Vinod Kumar Vavilapalli (Created) (JIRA)
MR AM for sort-job going out of memory
--

 Key: MAPREDUCE-3333
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3333
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: applicationmaster, mrv2
Affects Versions: 0.23.0
Reporter: Vinod Kumar Vavilapalli
Priority: Blocker


[~Karams] just found this. The usual sort job on a 350 node cluster hung due to 
OutOfMemory and eventually failed after an hour instead of the usual odd 20 
minutes.
{code}
2011-11-02 11:40:36,438 ERROR [ContainerLauncher #258] 
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Container 
launch failed for container_1320233407485_0002
_01_001434 : java.lang.reflect.UndeclaredThrowableException
at 
org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:88)
at 
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:290)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Caused by: com.google.protobuf.ServiceException: java.io.IOException: Failed on 
local exception: java.io.IOException: Couldn't set up IO streams; Host Details 
: local host is: "gsbl91281.blue.ygrid.yahoo.com/98.137.101.189"; destination 
host is: ""gsbl91525.blue.ygrid.yahoo.com":45450; 
at 
org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:139)
at $Proxy20.startContainer(Unknown Source)
at 
org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:81)
... 4 more
Caused by: java.io.IOException: Failed on local exception: java.io.IOException: 
Couldn't set up IO streams; Host Details : local host is: 
"gsbl91281.blue.ygrid.yahoo.com/98.137.101.189"; destination host is: 
""gsbl91525.blue.ygrid.yahoo.com":45450; 
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:655)
at org.apache.hadoop.ipc.Client.call(Client.java:1089)
at 
org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:136)
... 6 more
Caused by: java.io.IOException: Couldn't set up IO streams
at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:621)
at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:205)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1195)
at org.apache.hadoop.ipc.Client.call(Client.java:1065)
... 7 more
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:597)
at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:614)
... 10 more
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3332) contrib/raid compile breaks due to changes in hdfs/protocol/datatransfer/Sender#writeBlock related to checksum handling

2011-11-02 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142145#comment-13142145
 ] 

Hudson commented on MAPREDUCE-3332:
---

Integrated in Hadoop-Mapreduce-trunk #885 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/885/])
MAPREDUCE-3332. contrib/raid compile breaks due to changes in 
hdfs/protocol/datatransfer/Sender#writeBlock related to checksum handling 
(Hitesh Shah via mahadev)

mahadev : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196356
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/raid/src/java/org/apache/hadoop/raid/BlockFixer.java


> contrib/raid compile breaks due to changes in 
> hdfs/protocol/datatransfer/Sender#writeBlock related to checksum handling 
> 
>
> Key: MAPREDUCE-3332
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3332
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: contrib/raid
>Affects Versions: 0.23.0
>Reporter: Hitesh Shah
>Assignee: Hitesh Shah
>Priority: Trivial
> Fix For: 0.23.0
>
> Attachments: MR-3332.1.patch
>
>
> [javac] 
> /Users/Hitesh/dev/hadoop-common/hadoop-mapreduce-project/src/contrib/raid/src/java/org/apache/hadoop/raid/BlockFixer.java:783:
>  
> writeBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock,org.apache.hadoop.security.token.Token,java.lang.String,org.apache.hadoop.hdfs.protocol.DatanodeInfo[],org.apache.hadoop.hdfs.protocol.DatanodeInfo,org.apache.hadoop.hdfs.protocol.datatransfer.BlockConstructionStage,int,long,long,long,org.apache.hadoop.util.DataChecksum)
>  in org.apache.hadoop.hdfs.protocol.datatransfer.Sender cannot be applied to 
> (org.apache.hadoop.hdfs.protocol.ExtendedBlock,org.apache.hadoop.security.token.Token,java.lang.String,org.apache.hadoop.hdfs.protocol.DatanodeInfo[],,org.apache.hadoop.hdfs.protocol.datatransfer.BlockConstructionStage,int,long,long,long)
> [javac] new Sender(out).writeBlock(block.getBlock(), 
> block.getBlockToken(), "",
> [javac]^

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3332) contrib/raid compile breaks due to changes in hdfs/protocol/datatransfer/Sender#writeBlock related to checksum handling

2011-11-02 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142133#comment-13142133
 ] 

Hudson commented on MAPREDUCE-3332:
---

Integrated in Hadoop-Mapreduce-0.23-Build #78 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/78/])
MAPREDUCE-3332. contrib/raid compile breaks due to changes in 
hdfs/protocol/datatransfer/Sender#writeBlock related to checksum handling 
(Hitesh Shah via mahadev) -Merging r1196356 from trunk

mahadev : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196366
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/raid/src/java/org/apache/hadoop/raid/BlockFixer.java


> contrib/raid compile breaks due to changes in 
> hdfs/protocol/datatransfer/Sender#writeBlock related to checksum handling 
> 
>
> Key: MAPREDUCE-3332
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3332
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: contrib/raid
>Affects Versions: 0.23.0
>Reporter: Hitesh Shah
>Assignee: Hitesh Shah
>Priority: Trivial
> Fix For: 0.23.0
>
> Attachments: MR-3332.1.patch
>
>
> [javac] 
> /Users/Hitesh/dev/hadoop-common/hadoop-mapreduce-project/src/contrib/raid/src/java/org/apache/hadoop/raid/BlockFixer.java:783:
>  
> writeBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock,org.apache.hadoop.security.token.Token,java.lang.String,org.apache.hadoop.hdfs.protocol.DatanodeInfo[],org.apache.hadoop.hdfs.protocol.DatanodeInfo,org.apache.hadoop.hdfs.protocol.datatransfer.BlockConstructionStage,int,long,long,long,org.apache.hadoop.util.DataChecksum)
>  in org.apache.hadoop.hdfs.protocol.datatransfer.Sender cannot be applied to 
> (org.apache.hadoop.hdfs.protocol.ExtendedBlock,org.apache.hadoop.security.token.Token,java.lang.String,org.apache.hadoop.hdfs.protocol.DatanodeInfo[],,org.apache.hadoop.hdfs.protocol.datatransfer.BlockConstructionStage,int,long,long,long)
> [javac] new Sender(out).writeBlock(block.getBlock(), 
> block.getBlockToken(), "",
> [javac]^

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3332) contrib/raid compile breaks due to changes in hdfs/protocol/datatransfer/Sender#writeBlock related to checksum handling

2011-11-02 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142116#comment-13142116
 ] 

Hudson commented on MAPREDUCE-3332:
---

Integrated in Hadoop-Hdfs-trunk #851 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/851/])
MAPREDUCE-3332. contrib/raid compile breaks due to changes in 
hdfs/protocol/datatransfer/Sender#writeBlock related to checksum handling 
(Hitesh Shah via mahadev)

mahadev : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196356
Files : 
* /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/raid/src/java/org/apache/hadoop/raid/BlockFixer.java


> contrib/raid compile breaks due to changes in 
> hdfs/protocol/datatransfer/Sender#writeBlock related to checksum handling 
> 
>
> Key: MAPREDUCE-3332
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3332
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: contrib/raid
>Affects Versions: 0.23.0
>Reporter: Hitesh Shah
>Assignee: Hitesh Shah
>Priority: Trivial
> Fix For: 0.23.0
>
> Attachments: MR-3332.1.patch
>
>
> [javac] 
> /Users/Hitesh/dev/hadoop-common/hadoop-mapreduce-project/src/contrib/raid/src/java/org/apache/hadoop/raid/BlockFixer.java:783:
>  
> writeBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock,org.apache.hadoop.security.token.Token,java.lang.String,org.apache.hadoop.hdfs.protocol.DatanodeInfo[],org.apache.hadoop.hdfs.protocol.DatanodeInfo,org.apache.hadoop.hdfs.protocol.datatransfer.BlockConstructionStage,int,long,long,long,org.apache.hadoop.util.DataChecksum)
>  in org.apache.hadoop.hdfs.protocol.datatransfer.Sender cannot be applied to 
> (org.apache.hadoop.hdfs.protocol.ExtendedBlock,org.apache.hadoop.security.token.Token,java.lang.String,org.apache.hadoop.hdfs.protocol.DatanodeInfo[],,org.apache.hadoop.hdfs.protocol.datatransfer.BlockConstructionStage,int,long,long,long)
> [javac] new Sender(out).writeBlock(block.getBlock(), 
> block.getBlockToken(), "",
> [javac]^

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3324) Not All HttpServer tools links (stacks,logs,config,metrics) are accessible through all UI servers

2011-11-02 Thread jirapos...@reviews.apache.org (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142109#comment-13142109
 ] 

jirapos...@reviews.apache.org commented on MAPREDUCE-3324:
--


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/2665/#review3011
---

Ship it!


Looks fine as is

- Mark


On 2011-11-01 21:15:14, Jonathan Eagles wrote:
bq.  
bq.  ---
bq.  This is an automatically generated e-mail. To reply, visit:
bq.  https://reviews.apache.org/r/2665/
bq.  ---
bq.  
bq.  (Updated 2011-11-01 21:15:14)
bq.  
bq.  
bq.  Review request for Tom Graves, Robert Evans and Mark Holderbaugh.
bq.  
bq.  
bq.  Summary
bq.  ---
bq.  
bq.  Nodemanager has no tools listed under tools UI.
bq.  Jobhistory server has no logs tool listed under tools UI.
bq.  
bq.  
bq.  This addresses bug MAPREDUCE-3324.
bq.  http://issues.apache.org/jira/browse/MAPREDUCE-3324
bq.  
bq.  
bq.  Diffs
bq.  -
bq.  
bq.
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/webapp/HsNavBlock.java
 8d3ccff 
bq.
hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/NavBlock.java
 01ea4aa 
bq.  
bq.  Diff: https://reviews.apache.org/r/2665/diff
bq.  
bq.  
bq.  Testing
bq.  ---
bq.  
bq.  Manually verified that Tools navigation bar on NM and Job History contain 
stacks,logs,config,metrics
bq.  
bq.  
bq.  Thanks,
bq.  
bq.  Jonathan
bq.  
bq.



> Not All HttpServer tools links (stacks,logs,config,metrics) are accessible 
> through all UI servers
> -
>
> Key: MAPREDUCE-3324
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3324
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver, mrv2, nodemanager
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Critical
> Attachments: MAPREDUCE-3324.patch
>
>
> Nodemanager has no tools listed under tools UI.
> Jobhistory server has no logs tool listed under tools UI.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3332) contrib/raid compile breaks due to changes in hdfs/protocol/datatransfer/Sender#writeBlock related to checksum handling

2011-11-02 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142103#comment-13142103
 ] 

Hudson commented on MAPREDUCE-3332:
---

Integrated in Hadoop-Hdfs-0.23-Build #64 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/64/])
MAPREDUCE-3332. contrib/raid compile breaks due to changes in 
hdfs/protocol/datatransfer/Sender#writeBlock related to checksum handling 
(Hitesh Shah via mahadev) -Merging r1196356 from trunk

mahadev : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1196366
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/raid/src/java/org/apache/hadoop/raid/BlockFixer.java


> contrib/raid compile breaks due to changes in 
> hdfs/protocol/datatransfer/Sender#writeBlock related to checksum handling 
> 
>
> Key: MAPREDUCE-3332
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3332
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: contrib/raid
>Affects Versions: 0.23.0
>Reporter: Hitesh Shah
>Assignee: Hitesh Shah
>Priority: Trivial
> Fix For: 0.23.0
>
> Attachments: MR-3332.1.patch
>
>
> [javac] 
> /Users/Hitesh/dev/hadoop-common/hadoop-mapreduce-project/src/contrib/raid/src/java/org/apache/hadoop/raid/BlockFixer.java:783:
>  
> writeBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock,org.apache.hadoop.security.token.Token,java.lang.String,org.apache.hadoop.hdfs.protocol.DatanodeInfo[],org.apache.hadoop.hdfs.protocol.DatanodeInfo,org.apache.hadoop.hdfs.protocol.datatransfer.BlockConstructionStage,int,long,long,long,org.apache.hadoop.util.DataChecksum)
>  in org.apache.hadoop.hdfs.protocol.datatransfer.Sender cannot be applied to 
> (org.apache.hadoop.hdfs.protocol.ExtendedBlock,org.apache.hadoop.security.token.Token,java.lang.String,org.apache.hadoop.hdfs.protocol.DatanodeInfo[],,org.apache.hadoop.hdfs.protocol.datatransfer.BlockConstructionStage,int,long,long,long)
> [javac] new Sender(out).writeBlock(block.getBlock(), 
> block.getBlockToken(), "",
> [javac]^

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3131) Docs and Scripts for setting up single node MRV2 cluster.

2011-11-02 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142088#comment-13142088
 ] 

Hadoop QA commented on MAPREDUCE-3131:
--

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12501949/MAPREDUCE-3131.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+0 tests included.  The patch appears to be a documentation patch that 
doesn't require tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1239//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1239//console

This message is automatically generated.

> Docs and Scripts for setting up single node MRV2 cluster. 
> --
>
> Key: MAPREDUCE-3131
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3131
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: documentation, mrv2
>Affects Versions: 0.24.0
>Reporter: Prashant Sharma
>Assignee: Prashant Sharma
>Priority: Trivial
>  Labels: documentation, hadoop
> Fix For: 0.24.0
>
> Attachments: MAPREDUCE-3131.patch, MAPREDUCE-3131.patch, 
> MAPREDUCE-3131.patch, MAPREDUCE-3131.patch
>
>   Original Estimate: 168h
>  Time Spent: 96h
>  Remaining Estimate: 72h
>
> Scripts to run a single node cluster with a default configuration. Takes care 
> of running all the daemons including hdfs and yarn. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (MAPREDUCE-3280) MR AM should not read the username from configuration

2011-11-02 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-3280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13142084#comment-13142084
 ] 

Hadoop QA commented on MAPREDUCE-3280:
--

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12501948/MAPREDUCE-3280-20111102.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1238//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/1238//console

This message is automatically generated.

> MR AM should not read the username from configuration
> -
>
> Key: MAPREDUCE-3280
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3280
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster, mrv2
>Affects Versions: 0.23.0
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
> Fix For: 0.23.1
>
> Attachments: MAPREDUCE-3280-20111102.txt
>
>
> MR AM reads the value for mapreduce.job.user.name from the configuration in 
> several places. It should instead get the app-submitter name from the RM.
> Once that is done, we can remove the default value for 
> mapreduce.job.user.name from mapred-default.xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3131) Docs and Scripts for setting up single node MRV2 cluster.

2011-11-02 Thread Prashant Sharma (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prashant Sharma updated MAPREDUCE-3131:
---

Attachment: MAPREDUCE-3131.patch

New, improved patch. Improved documentation for setting up a single-node cluster 
using the scripts. 

Waiting for reviews. 

> Docs and Scripts for setting up single node MRV2 cluster. 
> --
>
> Key: MAPREDUCE-3131
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3131
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: documentation, mrv2
>Affects Versions: 0.24.0
>Reporter: Prashant Sharma
>Assignee: Prashant Sharma
>Priority: Trivial
>  Labels: documentation, hadoop
> Fix For: 0.24.0
>
> Attachments: MAPREDUCE-3131.patch, MAPREDUCE-3131.patch, 
> MAPREDUCE-3131.patch, MAPREDUCE-3131.patch
>
>   Original Estimate: 168h
>  Time Spent: 96h
>  Remaining Estimate: 72h
>
> Scripts to run a single node cluster with a default configuration. Takes care 
> of running all the daemons including hdfs and yarn. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3280) MR AM should not read the username from configuration

2011-11-02 Thread Vinod Kumar Vavilapalli (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-3280:
---

Status: Patch Available  (was: Open)

> MR AM should not read the username from configuration
> -
>
> Key: MAPREDUCE-3280
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3280
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster, mrv2
>Affects Versions: 0.23.0
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
> Fix For: 0.23.1
>
>     Attachments: MAPREDUCE-3280-20111102.txt
>
>
> MR AM reads the value for mapreduce.job.user.name from the configuration in 
> several places. It should instead get the app-submitter name from the RM.
> Once that is done, we can remove the default value for 
> mapreduce.job.user.name from mapred-default.xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3280) MR AM should not read the username from configuration

2011-11-02 Thread Vinod Kumar Vavilapalli (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-3280:
---

Attachment: MAPREDUCE-3280-20111102.txt

Save for YarnChild, which was depending on the user name, the code doesn't need 
this. Attaching patch.
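
A minimal sketch of the direction described in this issue (not the actual patch; the step 
of asking the RM for the app submitter is simplified here to a UserGroupInformation lookup, 
and the wrapper class and method names are illustrative, while Configuration and 
UserGroupInformation are the real Hadoop APIs):

{code}
// Sketch only: contrasts reading the user name from job configuration with
// deriving it from the security layer, instead of trusting mapreduce.job.user.name.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class SubmitterNameSketch {

  // Old pattern: trust whatever mapreduce.job.user.name happens to say.
  static String userFromConf(Configuration conf) {
    return conf.get("mapreduce.job.user.name");
  }

  // Preferred direction: derive the name from who actually owns the process/app.
  static String userFromUgi() throws IOException {
    return UserGroupInformation.getCurrentUser().getShortUserName();
  }
}
{code}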

> MR AM should not read the username from configuration
> -
>
> Key: MAPREDUCE-3280
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3280
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster, mrv2
>Affects Versions: 0.23.0
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
> Fix For: 0.23.1
>
> Attachments: MAPREDUCE-3280-2002.txt
>
>
> MR AM reads the value for mapreduce.job.user.name from the configuration in 
> several places. It should instead get the app-submitter name from the RM.
> Once that is done, we can remove the default value for 
> mapreduce.job.user.name from mapred-default.xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (MAPREDUCE-3280) MR AM should not read the username from configuration

2011-11-02 Thread Vinod Kumar Vavilapalli (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli reassigned MAPREDUCE-3280:
--

Assignee: Vinod Kumar Vavilapalli

> MR AM should not read the username from configuration
> -
>
> Key: MAPREDUCE-3280
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3280
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster, mrv2
>Affects Versions: 0.23.0
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
> Fix For: 0.23.1
>
>
> MR AM reads the value for mapreduce.job.user.name from the configuration in 
> several places. It should instead get the app-submitter name from the RM.
> Once that is done, we can remove the default value for 
> mapreduce.job.user.name from mapred-default.xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (MAPREDUCE-3143) Complete aggregation of user-logs spit out by containers onto DFS

2011-11-02 Thread Vinod Kumar Vavilapalli (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli resolved MAPREDUCE-3143.


   Resolution: Fixed
Fix Version/s: (was: 0.23.1)
   0.23.0

All the pending bugs are fixed. Closing this umbrella ticket. Thanks Sid for 
wrapping this up!

> Complete aggregation of user-logs spit out by containers onto DFS
> -
>
> Key: MAPREDUCE-3143
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3143
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2, nodemanager
>Affects Versions: 0.23.0
>Reporter: Vinod Kumar Vavilapalli
> Fix For: 0.23.0
>
>
> The feature for handling user-logs spit out by containers in the NodeManager 
> is already implemented, but it is currently disabled due to user-interface 
> issues.
> This is the umbrella ticket for tracking the pending bugs w.r.t. putting 
> container-logs on DFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3093) Write additional tests for data locality in MRv2.

2011-11-02 Thread Vinod Kumar Vavilapalli (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-3093:
---

Component/s: test
 mrv2

> Write additional tests for data locality in MRv2.
> -
>
> Key: MAPREDUCE-3093
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3093
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2, test
>Affects Versions: 0.23.0
>Reporter: Mahadev konar
>Assignee: Mahadev konar
> Fix For: 0.23.1
>
>
> We should add tests to make sure data locality is in place in MRv2 (with 
> respect to the capacity scheduler and also the matching/ask of containers in 
> the MR AM).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3220) ant test TestCombineOutputCollector failing on trunk

2011-11-02 Thread Vinod Kumar Vavilapalli (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-3220:
---

  Component/s: test
Fix Version/s: (was: 0.23.1)
   0.23.0
 Hadoop Flags: Reviewed

> ant test TestCombineOutputCollector failing on trunk
> 
>
> Key: MAPREDUCE-3220
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3220
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: mrv2, test
>Affects Versions: 0.23.0
>Reporter: Hitesh Shah
>Assignee: Devaraj K
>Priority: Minor
> Fix For: 0.23.0
>
> Attachments: MAPREDUCE-3220.patch
>
>
> Testsuite: org.apache.hadoop.mapred.TestCombineOutputCollector
> Tests run: 2, Failures: 2, Errors: 0, Time elapsed: 1.591 sec
> Testcase: testCustomCollect took 0.363 sec
> FAILED
> taskReporter.progress();
> Never wanted here:
> -> at 
> org.apache.hadoop.mapred.TestCombineOutputCollector.testCustomCollect(TestCombineOutputCollector.java:118)
> But invoked here:
> -> at 
> org.apache.hadoop.mapred.Task$CombineOutputCollector.collect(Task.java:1202)
> junit.framework.AssertionFailedError:
> taskReporter.progress();
> Never wanted here:
> -> at 
> org.apache.hadoop.mapred.TestCombineOutputCollector.testCustomCollect(TestCombineOutputCollector.java:118)
> But invoked here:
> -> at 
> org.apache.hadoop.mapred.Task$CombineOutputCollector.collect(Task.java:1202)
> at 
> org.apache.hadoop.mapred.TestCombineOutputCollector.testCustomCollect(TestCombineOutputCollector.java:118)
> Testcase: testDefaultCollect took 1.211 sec
> FAILED
> taskReporter.progress();
> Wanted 1 time:
> -> at 
> org.apache.hadoop.mapred.TestCombineOutputCollector.testDefaultCollect(TestCombineOutputCollector.java:139)
> But was 1 times. Undesired invocation:
> -> at 
> org.apache.hadoop.mapred.Task$CombineOutputCollector.collect(Task.java:1202)
> junit.framework.AssertionFailedError:
> taskReporter.progress();
> Wanted 1 time:
> -> at 
> org.apache.hadoop.mapred.TestCombineOutputCollector.testDefaultCollect(TestCombineOutputCollector.java:139)
> But was 1 times. Undesired invocation:
> -> at 
> org.apache.hadoop.mapred.Task$CombineOutputCollector.collect(Task.java:1202)
> at 
> org.apache.hadoop.mapred.TestCombineOutputCollector.testDefaultCollect(TestCombineOutputCollector.java:139)
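
For readers unfamiliar with the output above: it comes from Mockito's verification API.
A minimal sketch of the kind of checks involved, using a made-up mock of
org.apache.hadoop.mapred.Reporter rather than the test's actual code:

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.never;
    import static org.mockito.Mockito.times;
    import static org.mockito.Mockito.verify;

    import org.apache.hadoop.mapred.Reporter;

    public class ProgressVerificationSketch {
      public static void main(String[] args) {
        Reporter reporter = mock(Reporter.class);
        // Passes only if progress() was never invoked; otherwise Mockito reports
        // "Never wanted here ... But invoked here", as in the log above.
        verify(reporter, never()).progress();

        reporter.progress();
        // Passes only if progress() was invoked exactly once; a mismatch yields
        // "Wanted 1 time ... But was N times".
        verify(reporter, times(1)).progress();
      }
    }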

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3218) ant test TestTokenCache failing on trunk

2011-11-02 Thread Vinod Kumar Vavilapalli (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-3218:
---

  Component/s: test
Fix Version/s: (was: 0.23.1)
   0.23.0

> ant test TestTokenCache failing on trunk
> 
>
> Key: MAPREDUCE-3218
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3218
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: mrv2, test
>Affects Versions: 0.23.0
>Reporter: Hitesh Shah
>Assignee: Hitesh Shah
>Priority: Minor
> Fix For: 0.23.0
>
>
> Testcase: testTokenCache took 11.607 sec
> Testcase: testLocalJobTokenCache took 12.224 sec
> Testcase: testGetTokensForNamenodes took 0.009 sec
> Testcase: testGetTokensForHftpFS took 0.676 sec
> Testcase: testGetJTPrincipal took 0.023 sec
> FAILED
> Failed to substitute HOSTNAME_PATTERN with hostName expected: but 
> was:
> junit.framework.AssertionFailedError: Failed to substitute HOSTNAME_PATTERN 
> with hostName expected: but was:
> at 
> org.apache.hadoop.mapreduce.security.TestTokenCache.testGetJTPrincipal(TestTokenCache.java:392)
> Testcase: testGetTokensForViewFS took 0.019 sec
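
For context, HOSTNAME_PATTERN refers to the _HOST placeholder in a Kerberos principal,
which gets replaced with the actual hostname. A minimal sketch of that substitution using
the standard SecurityUtil helper; the principal and hostname below are made up:

    import java.io.IOException;
    import org.apache.hadoop.security.SecurityUtil;

    public class HostPatternSketch {
      public static void main(String[] args) throws IOException {
        // Replaces the _HOST placeholder with the supplied hostname.
        String principal = SecurityUtil.getServerPrincipal(
            "jt/_HOST@EXAMPLE.COM", "jobtracker.example.com");
        // Expected: jt/jobtracker.example.com@EXAMPLE.COM
        System.out.println(principal);
      }
    }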

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-3216) ant test TestNoDefaultsJobConf fails on trunk

2011-11-02 Thread Vinod Kumar Vavilapalli (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-3216:
---

Fix Version/s: (was: 0.23.1)
   0.23.0

> ant test TestNoDefaultsJobConf fails on trunk
> -
>
> Key: MAPREDUCE-3216
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3216
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Hitesh Shah
>Assignee: Hitesh Shah
>Priority: Minor
> Fix For: 0.23.0
>
>
> Testcase: testNoDefaults took 4.703 sec
> Caused an ERROR
> Cannot initialize Cluster. Please check your configuration for 
> mapreduce.framework.name and the correspond server addresses.
> java.io.IOException: Cannot initialize Cluster. Please check your 
> configuration for mapreduce.framework.name and the correspond server 
> addresses.
> at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:118)
> at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:81)
> at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:74)
> at org.apache.hadoop.mapred.JobClient.init(JobClient.java:460)
> at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:439)
> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:809)
> at 
> org.apache.hadoop.conf.TestNoDefaultsJobConf.testNoDefaults(TestNoDefaultsJobConf.java:83)
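
For context, Cluster.initialize() raises this error when mapreduce.framework.name does not
resolve to a known client protocol provider. A minimal sketch of setting the property
explicitly before creating a JobClient; the value shown is only one of the usual options
and may not be what this test requires:

    import java.io.IOException;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class FrameworkNameSketch {
      public static void main(String[] args) throws IOException {
        JobConf conf = new JobConf();
        // Common values are "local", "classic" (mrv1) and "yarn" (mrv2).
        conf.set("mapreduce.framework.name", "local");
        // Fails with "Cannot initialize Cluster ..." if the property is unset
        // or no provider recognizes its value.
        JobClient client = new JobClient(conf);
        client.close();
      }
    }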

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (MAPREDUCE-2692) Ensure AM Restart and Recovery-on-restart is complete

2011-11-02 Thread Vinod Kumar Vavilapalli (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-2692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-2692:
---

Affects Version/s: 0.23.0
Fix Version/s: (was: 0.23.1)
   0.23.0

> Ensure AM Restart and Recovery-on-restart is complete
> -
>
> Key: MAPREDUCE-2692
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2692
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Amol Kekre
>Assignee: Sharad Agarwal
> Fix For: 0.23.0
>
>
> Need to get AM restart and the subsequent recovery after restart to work.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (MAPREDUCE-3215) org.apache.hadoop.mapreduce.TestNoJobSetupCleanup failing on trunk

2011-11-02 Thread Devaraj K (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K reassigned MAPREDUCE-3215:


Assignee: Hitesh Shah  (was: Devaraj K)

> org.apache.hadoop.mapreduce.TestNoJobSetupCleanup failing on trunk
> --
>
> Key: MAPREDUCE-3215
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-3215
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: mrv2
>Affects Versions: 0.23.0
>Reporter: Hitesh Shah
>Assignee: Hitesh Shah
>Priority: Minor
> Fix For: 0.24.0
>
>
> Testcase: testNoJobSetupCleanup took 13.271 sec
> FAILED
> Number of part-files is 0 and not 1
> junit.framework.AssertionFailedError: Number of part-files is 0 and not 1
> at 
> org.apache.hadoop.mapreduce.TestNoJobSetupCleanup.submitAndValidateJob(TestNoJobSetupCleanup.java:60)
> at 
> org.apache.hadoop.mapreduce.TestNoJobSetupCleanup.testNoJobSetupCleanup(TestNoJobSetupCleanup.java:70)
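
For context, the failing assertion counts the part-* output files the job produced. A
minimal sketch of that kind of check; the output directory and expected count are
illustrative, not taken from the test:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PartFileCountSketch {
      public static void main(String[] args) throws IOException {
        FileSystem fs = FileSystem.get(new Configuration());
        // "/tmp/job-output" is a made-up output directory.
        FileStatus[] parts = fs.globStatus(new Path("/tmp/job-output", "part-*"));
        int found = (parts == null) ? 0 : parts.length;
        // The assertion quoted above expects exactly one part file.
        System.out.println("part-files found: " + found);
      }
    }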

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira