[jira] [Updated] (HADOOP-8268) hadoop-project-dist/pom.xml fails XML validation

2012-05-02 Thread Radim Kolar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radim Kolar updated HADOOP-8268:


Attachment: (was: hadoop-pom.txt)

> hadoop-project-dist/pom.xml fails XML validation
> 
>
> Key: HADOOP-8268
> URL: https://issues.apache.org/jira/browse/HADOOP-8268
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0
> Environment: FreeBSD 8.2 / AMD64
>Reporter: Radim Kolar
>  Labels: maven, patch
> Attachments: hadoop-pom.txt
>
>
> This POM file contains embedded Ant commands that use '>' (shell 
> redirection). The unescaped character makes the XML invalid, so the POM 
> cannot be deployed to validating Maven repository managers such as 
> Artifactory.
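For context, the usual remedy is to escape the redirection character inside the POM's element text so strict validators accept it; here is a minimal sketch of that round trip (the `<argument>` fragment is hypothetical, not the actual Hadoop POM) using only the Python standard library:

```python
from xml.sax.saxutils import escape
import xml.etree.ElementTree as ET

# Hypothetical fragment mimicking an Ant command line embedded in a POM.
raw_command = 'cat target/classes.list > target/all-classes.txt'

# Escaping turns '>' into '&gt;', which strict validators accept unambiguously.
escaped = escape(raw_command)
pom_fragment = '<argument>%s</argument>' % escaped

# A strict XML parser accepts the escaped fragment...
element = ET.fromstring(pom_fragment)
# ...and parsing restores the original command text.
assert element.text == raw_command
```

The same escaping applies to '<' and '&', which are invalid in element text everywhere, not just in validating repository managers.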

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8268) hadoop-project-dist/pom.xml fails XML validation

2012-05-02 Thread Radim Kolar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radim Kolar updated HADOOP-8268:


Attachment: hadoop-pom.txt

Re-uploaded the patch to see if the new Hadoop build-checking system will kick in.

> hadoop-project-dist/pom.xml fails XML validation
> 
>
> Key: HADOOP-8268
> URL: https://issues.apache.org/jira/browse/HADOOP-8268
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0
> Environment: FreeBSD 8.2 / AMD64
>Reporter: Radim Kolar
>  Labels: maven, patch
> Attachments: hadoop-pom.txt
>
>
> This POM file contains embedded Ant commands that use '>' (shell 
> redirection). The unescaped character makes the XML invalid, so the POM 
> cannot be deployed to validating Maven repository managers such as 
> Artifactory.





[jira] [Commented] (HADOOP-8347) Hadoop Common logs misspell 'successful'

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267255#comment-13267255
 ] 

Hudson commented on HADOOP-8347:


Integrated in Hadoop-Common-trunk-Commit #2178 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2178/])
HADOOP-8347. Hadoop Common logs misspell 'successful'. Contributed by 
Philip Zeyliger (Revision 121)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=121
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ServiceAuthorizationManager.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c


> Hadoop Common logs misspell 'successful'
> 
>
> Key: HADOOP-8347
> URL: https://issues.apache.org/jira/browse/HADOOP-8347
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Philip Zeyliger
> Fix For: 2.0.0
>
> Attachments: 0001-HADOOP-8347.-Fixing-spelling-of-successful.patch
>
>
> 'successfull' is a misspelling of 'successful.'  Trivial patch attached.  The 
> constants are private, and there doesn't seem to be any serialized form of 
> these comments except in log files, so this shouldn't have compatibility 
> issues.





[jira] [Commented] (HADOOP-8347) Hadoop Common logs misspell 'successful'

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267252#comment-13267252
 ] 

Hudson commented on HADOOP-8347:


Integrated in Hadoop-Hdfs-trunk-Commit #2252 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2252/])
HADOOP-8347. Hadoop Common logs misspell 'successful'. Contributed by 
Philip Zeyliger (Revision 121)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=121
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ServiceAuthorizationManager.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c


> Hadoop Common logs misspell 'successful'
> 
>
> Key: HADOOP-8347
> URL: https://issues.apache.org/jira/browse/HADOOP-8347
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Philip Zeyliger
> Fix For: 2.0.0
>
> Attachments: 0001-HADOOP-8347.-Fixing-spelling-of-successful.patch
>
>
> 'successfull' is a misspelling of 'successful.'  Trivial patch attached.  The 
> constants are private, and there doesn't seem to be any serialized form of 
> these comments except in log files, so this shouldn't have compatibility 
> issues.





[jira] [Updated] (HADOOP-8347) Hadoop Common logs misspell 'successful'

2012-05-02 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8347:


  Resolution: Fixed
   Fix Version/s: 2.0.0
Target Version/s:   (was: 2.0.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Indeed.  I've committed this and merged. Thanks Phil!

> Hadoop Common logs misspell 'successful'
> 
>
> Key: HADOOP-8347
> URL: https://issues.apache.org/jira/browse/HADOOP-8347
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Philip Zeyliger
> Fix For: 2.0.0
>
> Attachments: 0001-HADOOP-8347.-Fixing-spelling-of-successful.patch
>
>
> 'successfull' is a misspelling of 'successful.'  Trivial patch attached.  The 
> constants are private, and there doesn't seem to be any serialized form of 
> these comments except in log files, so this shouldn't have compatibility 
> issues.





[jira] [Commented] (HADOOP-8349) ViewFS doesn't work when the root of a file system is mounted

2012-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267250#comment-13267250
 ] 

Hadoop QA commented on HADOOP-8349:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12525404/HADOOP-8349.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified test 
files.

-1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to cause Findbugs (version 1.3.9) to fail.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.viewfs.TestViewFsTrash

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/925//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/925//console

This message is automatically generated.

> ViewFS doesn't work when the root of a file system is mounted
> -
>
> Key: HADOOP-8349
> URL: https://issues.apache.org/jira/browse/HADOOP-8349
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HADOOP-8349.patch, HADOOP-8349.patch
>
>
> Viewing files under a ViewFS mount which mounts the root of a file system 
> shows trimmed paths. Trying to perform operations on files or directories 
> under the root-mounted file system doesn't work. More info in the first 
> comment of this JIRA.





[jira] [Updated] (HADOOP-8094) Make maven-eclipse-plugin use the spring project nature

2012-05-02 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8094:


  Resolution: Not A Problem
Target Version/s: 2.0.0, 3.0.0  (was: 3.0.0, 2.0.0)
  Status: Resolved  (was: Patch Available)

Actually, this whole change can be avoided by using the M2E plugin for Eclipse. 
It now works fine (with the latest updates) with Apache Hadoop projects.

Resolving as not-a-problem. We should simply stop using {{mvn eclipse:eclipse}} 
and instead import the projects into Eclipse as Maven projects.

> Make maven-eclipse-plugin use the spring project nature
> ---
>
> Key: HADOOP-8094
> URL: https://issues.apache.org/jira/browse/HADOOP-8094
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Harsh J
>Assignee: Harsh J
>  Labels: eclipse, maven
> Attachments: HADOOP-8094.patch
>
>
> To load multiple versions of Apache Hadoop into my Eclipse IDE today (or any 
> other IDE, perhaps), I'm supposed to generate the eclipse files as follows, 
> so that the version is appended to the project name and name conflicts are 
> avoided when I import another version:
> {{mvn -Declipse.addVersionToProjectName=true eclipse:eclipse}}
> But this does not work at present because Apache Hadoop lacks the 
> configuration that https://jira.codehaus.org/browse/MECLIPSE-702 demands. 
> Although the project names do get version suffixes, the "related project" 
> names recorded for dependencies do not carry the same suffix, so the import 
> breaks with 'dependent project  not found' errors everywhere.
> The fix is as Carlo details on https://jira.codehaus.org/browse/MECLIPSE-702, 
> and it works perfectly. I'll attach a patch adding the same configuration to 
> Apache Hadoop so that the above mechanism becomes possible.





[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append

2012-05-02 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267242#comment-13267242
 ] 

Eli Collins commented on HADOOP-8230:
-

bq. For testing sync, with this patch, since it is enabled by default, you do 
not need the flag right?

Correct: after my patch, the tests that no longer use append no longer set the 
append flag. The tests that call append for its side effects still use it.

I agree with Stack regarding the previous comment. Making sync actually work is 
a bug fix: it was a bug that we allowed people to call sync when, unlike 
append, there was no flag (disabled by default) gating it. Better to fix the 
default behavior so that sync works.

> Enable sync by default and disable append
> -
>
> Key: HADOOP-8230
> URL: https://issues.apache.org/jira/browse/HADOOP-8230
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 1.1.0
>
> Attachments: hadoop-8230.txt
>
>
> Per HDFS-3120 for 1.x let's:
> - Always enable the sync path, which is currently only enabled if 
> dfs.support.append is set
> - Remove the dfs.support.append configuration option. We'll keep the code 
> paths though in case we ever fix append on branch-1, in which case we can add 
> the config option back





[jira] [Updated] (HADOOP-8349) ViewFS doesn't work when the root of a file system is mounted

2012-05-02 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8349:
---

Status: Patch Available  (was: Open)

> ViewFS doesn't work when the root of a file system is mounted
> -
>
> Key: HADOOP-8349
> URL: https://issues.apache.org/jira/browse/HADOOP-8349
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HADOOP-8349.patch, HADOOP-8349.patch
>
>
> Viewing files under a ViewFS mount which mounts the root of a file system 
> shows trimmed paths. Trying to perform operations on files or directories 
> under the root-mounted file system doesn't work. More info in the first 
> comment of this JIRA.





[jira] [Updated] (HADOOP-8349) ViewFS doesn't work when the root of a file system is mounted

2012-05-02 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8349:
---

Attachment: HADOOP-8349.patch

Thanks a lot for the review, Todd. Here's an updated patch which adds some 
comments to the new test.

> ViewFS doesn't work when the root of a file system is mounted
> -
>
> Key: HADOOP-8349
> URL: https://issues.apache.org/jira/browse/HADOOP-8349
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HADOOP-8349.patch, HADOOP-8349.patch
>
>
> Viewing files under a ViewFS mount which mounts the root of a file system 
> shows trimmed paths. Trying to perform operations on files or directories 
> under the root-mounted file system doesn't work. More info in the first 
> comment of this JIRA.





[jira] [Commented] (HADOOP-8347) Hadoop Common logs misspell 'successful'

2012-05-02 Thread Philip Zeyliger (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267231#comment-13267231
 ] 

Philip Zeyliger commented on HADOOP-8347:
-

Test failures seem trashy:

bq. >>> org.apache.hadoop.fs.viewfs.TestViewFsTrash.testTrash 

That's unrelated.


> Hadoop Common logs misspell 'successful'
> 
>
> Key: HADOOP-8347
> URL: https://issues.apache.org/jira/browse/HADOOP-8347
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Philip Zeyliger
> Attachments: 0001-HADOOP-8347.-Fixing-spelling-of-successful.patch
>
>
> 'successfull' is a misspelling of 'successful.'  Trivial patch attached.  The 
> constants are private, and there doesn't seem to be any serialized form of 
> these comments except in log files, so this shouldn't have compatibility 
> issues.





[jira] [Commented] (HADOOP-8346) Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO

2012-05-02 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267195#comment-13267195
 ] 

Devaraj Das commented on HADOOP-8346:
-

Alejandro, can you please check whether the tests pass with this patch? Thanks!

> Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO
> ---
>
> Key: HADOOP-8346
> URL: https://issues.apache.org/jira/browse/HADOOP-8346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Devaraj Das
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: 8346-trunk.patch, debugger.png
>
>
> Before HADOOP-6941, the hadoop-auth test cases pass with Kerberos ON (*mvn 
> test -PtestKerberos*); after HADOOP-6941, the tests fail with the error below.
> Some IDE debugging shows that the HADOOP-6941 changes make the JVM Kerberos 
> libraries append an extra element to the server's Kerberos principal (on the 
> client side, when creating the token), so *HTTP/localhost* ends up as 
> *HTTP/localhost/localhost*. Then, when contacting the KDC to get the granting 
> ticket, the server principal is unknown.
> {code}
> testAuthenticationPost(org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator)
>   Time elapsed: 0.053 sec  <<< ERROR!
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Server not 
> found in Kerberos database (7) - UNKNOWN_SERVER)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:236)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:142)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:77)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:74)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils$1.run(KerberosTestUtils.java:111)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAs(KerberosTestUtils.java:108)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAsClient(KerberosTestUtils.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator.testAuthenticationPost(TestKerberosAuthenticator.java:74)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   a

[jira] [Updated] (HADOOP-8346) Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO

2012-05-02 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-8346:


Attachment: 8346-trunk.patch

Reverted to the original OID names.

> Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO
> ---
>
> Key: HADOOP-8346
> URL: https://issues.apache.org/jira/browse/HADOOP-8346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Devaraj Das
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: 8346-trunk.patch, debugger.png
>
>
> Before HADOOP-6941, the hadoop-auth test cases pass with Kerberos ON (*mvn 
> test -PtestKerberos*); after HADOOP-6941, the tests fail with the error below.
> Some IDE debugging shows that the HADOOP-6941 changes make the JVM Kerberos 
> libraries append an extra element to the server's Kerberos principal (on the 
> client side, when creating the token), so *HTTP/localhost* ends up as 
> *HTTP/localhost/localhost*. Then, when contacting the KDC to get the granting 
> ticket, the server principal is unknown.
> {code}
> testAuthenticationPost(org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator)
>   Time elapsed: 0.053 sec  <<< ERROR!
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Server not 
> found in Kerberos database (7) - UNKNOWN_SERVER)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:236)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:142)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:77)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:74)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils$1.run(KerberosTestUtils.java:111)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAs(KerberosTestUtils.java:108)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAsClient(KerberosTestUtils.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator.testAuthenticationPost(TestKerberosAuthenticator.java:74)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvid

[jira] [Created] (HADOOP-8350) Improve NetUtils.getInputStream to return a stream which has a tunable timeout

2012-05-02 Thread Todd Lipcon (JIRA)
Todd Lipcon created HADOOP-8350:
---

 Summary: Improve NetUtils.getInputStream to return a stream which 
has a tunable timeout
 Key: HADOOP-8350
 URL: https://issues.apache.org/jira/browse/HADOOP-8350
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 1.0.0, 2.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon


Currently, NetUtils.getInputStream will set the timeout on the new stream based 
on the socket's configured timeout at the time of construction. After that, the 
timeout cannot be changed. This causes a problem for cases like HDFS-3357. One 
approach used in some places in the code is to construct new streams when the 
timeout has to be changed, but this can cause bugs given that the streams are 
often wrapped by BufferedInputStreams.
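The idea behind the improvement can be sketched as follows (hypothetical names, not the actual NetUtils API): instead of freezing the timeout at construction, the stream consults its current timeout setting on every read, so callers can retune it without constructing and re-wrapping new streams:

```python
import socket

class TunableTimeoutStream:
    """Hypothetical sketch of a stream with a timeout that is tunable
    after construction: the current timeout is re-applied before each
    read instead of being fixed when the stream is created."""

    def __init__(self, sock, timeout):
        self._sock = sock
        self.timeout = timeout  # may be changed at any time

    def set_timeout(self, timeout):
        self.timeout = timeout

    def read(self, n):
        # Apply the *current* timeout on every read, not the one that
        # happened to be configured when the wrapper was built.
        self._sock.settimeout(self.timeout)
        return self._sock.recv(n)

# Demo with a local socket pair.
a, b = socket.socketpair()
stream = TunableTimeoutStream(b, timeout=5.0)
a.sendall(b'ping')
data = stream.read(4)
stream.set_timeout(0.01)  # retuned without rebuilding any wrappers
a.close(); b.close()
```

Because the wrapper object survives the timeout change, any buffering layered on top of it stays valid, which is exactly what breaks when new streams must be constructed instead.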





[jira] [Commented] (HADOOP-8349) ViewFS doesn't work when the root of a file system is mounted

2012-05-02 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267184#comment-13267184
 ] 

Todd Lipcon commented on HADOOP-8349:
-

Patch looks reasonable. Can you add some comments to the new test that explain 
what exactly it's doing? It's really hard to follow, what with the test 
inheritance going on.

> ViewFS doesn't work when the root of a file system is mounted
> -
>
> Key: HADOOP-8349
> URL: https://issues.apache.org/jira/browse/HADOOP-8349
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HADOOP-8349.patch
>
>
> Viewing files under a ViewFS mount which mounts the root of a file system 
> shows trimmed paths. Trying to perform operations on files or directories 
> under the root-mounted file system doesn't work. More info in the first 
> comment of this JIRA.





[jira] [Commented] (HADOOP-8349) ViewFS doesn't work when the root of a file system is mounted

2012-05-02 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267174#comment-13267174
 ] 

Todd Lipcon commented on HADOOP-8349:
-

Mind uploading a HADOOP-only patch so test-patch can run? I'll review the 
combined patch.

> ViewFS doesn't work when the root of a file system is mounted
> -
>
> Key: HADOOP-8349
> URL: https://issues.apache.org/jira/browse/HADOOP-8349
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HADOOP-8349.patch
>
>
> Viewing files under a ViewFS mount which mounts the root of a file system 
> shows trimmed paths. Trying to perform operations on files or directories 
> under the root-mounted file system doesn't work. More info in the first 
> comment of this JIRA.





[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append

2012-05-02 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267169#comment-13267169
 ] 

stack commented on HADOOP-8230:
---

bq. When an installation upgrades to a release with this patch, suddenly sync 
is enabled and there is no way to disable it.

Would such an installation be using the sync call?

> Enable sync by default and disable append
> -
>
> Key: HADOOP-8230
> URL: https://issues.apache.org/jira/browse/HADOOP-8230
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 1.1.0
>
> Attachments: hadoop-8230.txt
>
>
> Per HDFS-3120 for 1.x let's:
> - Always enable the sync path, which is currently only enabled if 
> dfs.support.append is set
> - Remove the dfs.support.append configuration option. We'll keep the code 
> paths though in case we ever fix append on branch-1, in which case we can add 
> the config option back





[jira] [Updated] (HADOOP-8349) ViewFS doesn't work when the root of a file system is mounted

2012-05-02 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8349:
---

Attachment: HADOOP-8349.patch

Here's a patch which addresses the issue. The crux of the problem was two 
separate off-by-one errors in ChRootedFileSystem. Since testing this requires 
being able to mount and futz with files/directories at the root of the target 
file system, I put the test that exercises this in HDFS.

> ViewFS doesn't work when the root of a file system is mounted
> -
>
> Key: HADOOP-8349
> URL: https://issues.apache.org/jira/browse/HADOOP-8349
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HADOOP-8349.patch
>
>
> Viewing files under a ViewFS mount which mounts the root of a file system 
> shows trimmed paths. Trying to perform operations on files or directories 
> under the root-mounted file system doesn't work. More info in the first 
> comment of this JIRA.





[jira] [Commented] (HADOOP-8349) ViewFS doesn't work when the root of a file system is mounted

2012-05-02 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267148#comment-13267148
 ] 

Aaron T. Myers commented on HADOOP-8349:


My NameNode is running at simon:8020. I have the following in my core-site.xml:

{code}
<property>
  <name>fs.default.name</name>
  <value>viewfs://vfs-cluster</value>
</property>

<property>
  <name>fs.viewfs.mounttable.vfs-cluster.link./nn-root</name>
  <value>hdfs://simon:8020/</value>
</property>
{code}

Looking at the contents of the HDFS FS directly yields the following output:
{noformat}
$ hadoop fs -ls /nn-root
Found 4 items
drwxr-xr-x   - hdfs supergroup  0 2012-05-02 20:40 /nn-root/fizz-buzz
drwxr-xr-x   - hdfs supergroup  0 2012-05-02 17:46 /nn-root/foo-bar
drwxrwxrwt   - hdfs supergroup  0 2012-04-13 16:53 /nn-root/tmp
drwxr-xr-x   - hdfs supergroup  0 2012-05-02 18:05 /nn-root/user
{noformat}
Looking at the contents via ViewFS yields the following output:
{noformat}
$ hadoop fs -ls /nn-root
Found 4 items
drwxr-xr-x   - hdfs supergroup  0 2012-05-02 20:40 /nn-root/izz-buzz
drwxrwxrwt   - hdfs supergroup  0 2012-04-13 16:53 /nn-root/mp
drwxr-xr-x   - hdfs supergroup  0 2012-05-02 17:46 /nn-root/oo-bar
drwxr-xr-x   - hdfs supergroup  0 2012-05-02 18:05 /nn-root/ser
{noformat}
Trying to make a directory via ViewFS yields the following output:
{noformat}
$ hadoop fs -mkdir /nn-root/fezz-bezz
-mkdir: Pathname  from //fezz-bezz is not a valid DFS filename.
Usage: hadoop fs [generic options] -mkdir [-p]  ...
{noformat}
Whereas trying to make a directory directly via HDFS works as expected.

> ViewFS doesn't work when the root of a file system is mounted
> -
>
> Key: HADOOP-8349
> URL: https://issues.apache.org/jira/browse/HADOOP-8349
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
>
> Viewing files under a ViewFS mount which mounts the root of a file system 
> shows trimmed paths. Trying to perform operations on files or directories 
> under the root-mounted file system doesn't work. More info in the first 
> comment of this JIRA.





[jira] [Created] (HADOOP-8349) ViewFS doesn't work when the root of a file system is mounted

2012-05-02 Thread Aaron T. Myers (JIRA)
Aaron T. Myers created HADOOP-8349:
--

 Summary: ViewFS doesn't work when the root of a file system is 
mounted
 Key: HADOOP-8349
 URL: https://issues.apache.org/jira/browse/HADOOP-8349
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers


Viewing files under a ViewFS mount which mounts the root of a file system shows 
trimmed paths. Trying to perform operations on files or directories under the 
root-mounted file system doesn't work. More info in the first comment of this 
JIRA.





[jira] [Updated] (HADOOP-8346) Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO

2012-05-02 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8346:
---

Attachment: debugger.png

@Devaraj,

trunk, *KerberosAuthenticator* class, line 200. The *servicePrincipal* var is 
'HTTP/localhost', and if you inspect the created GSSName object you'll find that 
internally it becomes 'HTTP/localhost/'. The attached screenshot shows a 
debug session of it.
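The failure mode described in this issue (*HTTP/localhost* becoming 
*HTTP/localhost/localhost*) can be sketched as plain string handling. This is 
purely illustrative: the real canonicalization happens inside the JVM's JGSS 
layer, not in Hadoop code, and the function name here is hypothetical:

```python
def canonicalize_hostbased(service_name: str, hostname: str) -> str:
    # A hostbased-service name is expected to be just the service
    # ("HTTP"); the library appends the host component itself.
    return f"{service_name}/{hostname}"

# Passing a principal that already contains the host component makes
# the library append the host a second time, yielding a server
# principal the KDC does not know.
wrong = canonicalize_hostbased("HTTP/localhost", "localhost")
assert wrong == "HTTP/localhost/localhost"   # UNKNOWN_SERVER at the KDC

right = canonicalize_hostbased("HTTP", "localhost")
assert right == "HTTP/localhost"
```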

To run the Kerberos testcases in hadoop-auth I do the following:

Create a test.properties file in hadoop-auth/ with the following contents:


{code}
httpfs.authentication.type=kerberos
httpfs.authentication.kerberos.principal=HTTP/localhost@LOCALHOST
httpfs.authentication.kerberos.keytab=/Users/tucu/httpfs.keytab
{code}

This assumes your realm is LOCALHOST, your SPNEGO principal for httpfs is 
HTTP/localhost, and the keytab contains that principal. You also have to kinit 
as a user. Then run:

{code}
mvn test -PtestKerberos -Dtest=TestKerberosAuthenticator
{code}

> Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO
> ---
>
> Key: HADOOP-8346
> URL: https://issues.apache.org/jira/browse/HADOOP-8346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Devaraj Das
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: debugger.png
>
>
> before HADOOP-6941 hadoop-auth testcases with Kerberos ON pass, *mvn test 
> -PtestKerberos*
> after HADOOP-6941 the tests fail with the error below.
> Doing some IDE debugging I've found out that the changes in HADOOP-6941 are 
> making the JVM Kerberos libraries to append an extra element to the kerberos 
> principal of the server (on the client side when creating the token) so 
> *HTTP/localhost* ends up being *HTTP/localhost/localhost*. Then, when 
> contacting the KDC to get the granting ticket, the server principal is 
> unknown.
> {code}
> testAuthenticationPost(org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator)
>   Time elapsed: 0.053 sec  <<< ERROR!
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Server not 
> found in Kerberos database (7) - UNKNOWN_SERVER)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:236)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:142)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:77)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:74)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils$1.run(KerberosTestUtils.java:111)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAs(KerberosTestUtils.java:108)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAsClient(KerberosTestUtils.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator.testAuthenticationPost(TestKerberosAuthenticator.java:74)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
>   at 
> org.apache.maven.sur

[jira] [Commented] (HADOOP-8348) Server$Listener.getAddress(..) may throw NullPointerException

2012-05-02 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267108#comment-13267108
 ] 

Uma Maheswara Rao G commented on HADOOP-8348:
-

Yes, Nicholas, I have also seen this in my tests; see HDFS-3328.

> Server$Listener.getAddress(..) may throw NullPointerException
> -
>
> Key: HADOOP-8348
> URL: https://issues.apache.org/jira/browse/HADOOP-8348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Tsz Wo (Nicholas), SZE
>
> [Build 
> #2365|https://builds.apache.org/job/PreCommit-HDFS-Build/2365//testReport/org.apache.hadoop.hdfs/TestHFlush/testHFlushInterrupted/]:
> {noformat}
> Exception in thread "DataXceiver for client /127.0.0.1:35472 [Waiting for 
> operation #2]" java.lang.NullPointerException
>   at org.apache.hadoop.ipc.Server$Listener.getAddress(Server.java:669)
>   at org.apache.hadoop.ipc.Server.getListenerAddress(Server.java:1988)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getIpcPort(DataNode.java:882)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getDisplayName(DataNode.java:863)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:177)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}





[jira] [Resolved] (HADOOP-8279) Auto-HA: Allow manual failover to be invoked from zkfc.

2012-05-02 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HADOOP-8279.
-

  Resolution: Fixed
Hadoop Flags: Reviewed

Committed to branch, thanks Aaron.

> Auto-HA: Allow manual failover to be invoked from zkfc.
> ---
>
> Key: HADOOP-8279
> URL: https://issues.apache.org/jira/browse/HADOOP-8279
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: auto-failover, ha
>Affects Versions: Auto Failover (HDFS-3042)
>Reporter: Mingjie Lai
>Assignee: Todd Lipcon
> Fix For: Auto Failover (HDFS-3042)
>
> Attachments: hadoop-8279.txt, hadoop-8279.txt, hadoop-8279.txt, 
> hadoop-8279.txt, hadoop-8279.txt
>
>
> HADOOP-8247 introduces a configure flag to prevent potential status 
> inconsistency between zkfc and namenode, by making auto and manual failover 
> mutually exclusive.
> However, as described in 2.7.2 section of design doc at HDFS-2185, we should 
> allow manual and auto failover co-exist, by:
> - adding some rpc interfaces at zkfc
> - manual failover shall be triggered by haadmin, and handled by zkfc if auto 
> failover is enabled. 





[jira] [Assigned] (HADOOP-8346) Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO

2012-05-02 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das reassigned HADOOP-8346:
---

Assignee: Devaraj Das

> Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO
> ---
>
> Key: HADOOP-8346
> URL: https://issues.apache.org/jira/browse/HADOOP-8346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Devaraj Das
>Priority: Blocker
> Fix For: 2.0.0
>
>
> before HADOOP-6941 hadoop-auth testcases with Kerberos ON pass, *mvn test 
> -PtestKerberos*
> after HADOOP-6941 the tests fail with the error below.
> Doing some IDE debugging I've found out that the changes in HADOOP-6941 are 
> making the JVM Kerberos libraries to append an extra element to the kerberos 
> principal of the server (on the client side when creating the token) so 
> *HTTP/localhost* ends up being *HTTP/localhost/localhost*. Then, when 
> contacting the KDC to get the granting ticket, the server principal is 
> unknown.
> {code}
> testAuthenticationPost(org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator)
>   Time elapsed: 0.053 sec  <<< ERROR!
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Server not 
> found in Kerberos database (7) - UNKNOWN_SERVER)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:236)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:142)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:77)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:74)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils$1.run(KerberosTestUtils.java:111)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAs(KerberosTestUtils.java:108)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAsClient(KerberosTestUtils.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator.testAuthenticationPost(TestKerberosAuthenticator.java:74)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(

[jira] [Commented] (HADOOP-8346) Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO

2012-05-02 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267087#comment-13267087
 ] 

Devaraj Das commented on HADOOP-8346:
-

I'll take a look at this.
@Alejandro, can you please provide some more detail, if you have it, on where the 
extra element is getting added to the principal? Thanks!

> Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO
> ---
>
> Key: HADOOP-8346
> URL: https://issues.apache.org/jira/browse/HADOOP-8346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Alejandro Abdelnur
>Priority: Blocker
> Fix For: 2.0.0
>
>
> before HADOOP-6941 hadoop-auth testcases with Kerberos ON pass, *mvn test 
> -PtestKerberos*
> after HADOOP-6941 the tests fail with the error below.
> Doing some IDE debugging I've found out that the changes in HADOOP-6941 are 
> making the JVM Kerberos libraries to append an extra element to the kerberos 
> principal of the server (on the client side when creating the token) so 
> *HTTP/localhost* ends up being *HTTP/localhost/localhost*. Then, when 
> contacting the KDC to get the granting ticket, the server principal is 
> unknown.
> {code}
> testAuthenticationPost(org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator)
>   Time elapsed: 0.053 sec  <<< ERROR!
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Server not 
> found in Kerberos database (7) - UNKNOWN_SERVER)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:236)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:142)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:77)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:74)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils$1.run(KerberosTestUtils.java:111)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAs(KerberosTestUtils.java:108)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAsClient(KerberosTestUtils.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator.testAuthenticationPost(TestKerberosAuthenticator.java:74)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org

[jira] [Updated] (HADOOP-8348) Server$Listener.getAddress(..) may throw NullPointerException

2012-05-02 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HADOOP-8348:
---

Description: 
[Build 
#2365|https://builds.apache.org/job/PreCommit-HDFS-Build/2365//testReport/org.apache.hadoop.hdfs/TestHFlush/testHFlushInterrupted/]:
{noformat}
Exception in thread "DataXceiver for client /127.0.0.1:35472 [Waiting for 
operation #2]" java.lang.NullPointerException
at org.apache.hadoop.ipc.Server$Listener.getAddress(Server.java:669)
at org.apache.hadoop.ipc.Server.getListenerAddress(Server.java:1988)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getIpcPort(DataNode.java:882)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getDisplayName(DataNode.java:863)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:177)
at java.lang.Thread.run(Thread.java:662)
{noformat}

  was:
{noformat}
Exception in thread "DataXceiver for client /127.0.0.1:35472 [Waiting for 
operation #2]" java.lang.NullPointerException
at org.apache.hadoop.ipc.Server$Listener.getAddress(Server.java:669)
at org.apache.hadoop.ipc.Server.getListenerAddress(Server.java:1988)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getIpcPort(DataNode.java:882)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getDisplayName(DataNode.java:863)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:177)
at java.lang.Thread.run(Thread.java:662)
{noformat}

Summary: Server$Listener.getAddress(..) may throw NullPointerException  
(was: Server$Listener.getAddress(..) may thow NullPointerException)

> Server$Listener.getAddress(..) may throw NullPointerException
> -
>
> Key: HADOOP-8348
> URL: https://issues.apache.org/jira/browse/HADOOP-8348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Tsz Wo (Nicholas), SZE
>
> [Build 
> #2365|https://builds.apache.org/job/PreCommit-HDFS-Build/2365//testReport/org.apache.hadoop.hdfs/TestHFlush/testHFlushInterrupted/]:
> {noformat}
> Exception in thread "DataXceiver for client /127.0.0.1:35472 [Waiting for 
> operation #2]" java.lang.NullPointerException
>   at org.apache.hadoop.ipc.Server$Listener.getAddress(Server.java:669)
>   at org.apache.hadoop.ipc.Server.getListenerAddress(Server.java:1988)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getIpcPort(DataNode.java:882)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getDisplayName(DataNode.java:863)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:177)
>   at java.lang.Thread.run(Thread.java:662)
> {noformat}





[jira] [Commented] (HADOOP-8347) Hadoop Common logs misspell 'successful'

2012-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267078#comment-13267078
 ] 

Hadoop QA commented on HADOOP-8347:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12525363/0001-HADOOP-8347.-Fixing-spelling-of-successful.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.fs.viewfs.TestViewFsTrash

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/924//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/924//console

This message is automatically generated.

> Hadoop Common logs misspell 'successful'
> 
>
> Key: HADOOP-8347
> URL: https://issues.apache.org/jira/browse/HADOOP-8347
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Philip Zeyliger
> Attachments: 0001-HADOOP-8347.-Fixing-spelling-of-successful.patch
>
>
> 'successfull' is a misspelling of 'successful.'  Trivial patch attached.  The 
> constants are private, and there doesn't seem to be any serialized form of 
> these comments except in log files, so this shouldn't have compatibility 
> issues.





[jira] [Created] (HADOOP-8348) Server$Listener.getAddress(..) may thow NullPointerException

2012-05-02 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-8348:
--

 Summary: Server$Listener.getAddress(..) may thow 
NullPointerException
 Key: HADOOP-8348
 URL: https://issues.apache.org/jira/browse/HADOOP-8348
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Tsz Wo (Nicholas), SZE


{noformat}
Exception in thread "DataXceiver for client /127.0.0.1:35472 [Waiting for 
operation #2]" java.lang.NullPointerException
at org.apache.hadoop.ipc.Server$Listener.getAddress(Server.java:669)
at org.apache.hadoop.ipc.Server.getListenerAddress(Server.java:1988)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getIpcPort(DataNode.java:882)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getDisplayName(DataNode.java:863)
at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:177)
at java.lang.Thread.run(Thread.java:662)
{noformat}
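The stack trace above shows getAddress() dereferencing the listener's accept 
channel after shutdown has nulled it. A minimal Python sketch of the defensive 
pattern (hypothetical stand-in, not the actual Server.java code): copy the 
field once, then check it, so a concurrent close() cannot race between the 
check and the dereference.

```python
import socket

class Listener:
    """Hypothetical stand-in for ipc.Server$Listener (illustrative only)."""

    def __init__(self):
        self.channel = socket.socket()
        self.channel.bind(("127.0.0.1", 0))  # ephemeral port

    def close(self):
        # Shutdown nulls the field, as close() does for the accept channel.
        ch, self.channel = self.channel, None
        ch.close()

    def get_address(self):
        # Read the field exactly once; a concurrent close() may have
        # nulled it, so return a sentinel instead of crashing.
        ch = self.channel
        return ch.getsockname() if ch is not None else None

listener = Listener()
assert listener.get_address() is not None
listener.close()
assert listener.get_address() is None  # no NPE-equivalent after shutdown
```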





[jira] [Commented] (HADOOP-8346) Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO

2012-05-02 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267071#comment-13267071
 ] 

Eli Collins commented on HADOOP-8346:
-

Let's revert and re-open HADOOP-6941 since that's dependent on other changes 
that are not yet complete anyway.

> Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO
> ---
>
> Key: HADOOP-8346
> URL: https://issues.apache.org/jira/browse/HADOOP-8346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Alejandro Abdelnur
>Priority: Blocker
> Fix For: 2.0.0
>
>
> before HADOOP-6941 hadoop-auth testcases with Kerberos ON pass, *mvn test 
> -PtestKerberos*
> after HADOOP-6941 the tests fail with the error below.
> Doing some IDE debugging I've found out that the changes in HADOOP-6941 are 
> making the JVM Kerberos libraries to append an extra element to the kerberos 
> principal of the server (on the client side when creating the token) so 
> *HTTP/localhost* ends up being *HTTP/localhost/localhost*. Then, when 
> contacting the KDC to get the granting ticket, the server principal is 
> unknown.
> {code}
> testAuthenticationPost(org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator)
>   Time elapsed: 0.053 sec  <<< ERROR!
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Server not 
> found in Kerberos database (7) - UNKNOWN_SERVER)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:236)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:142)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:77)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:74)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils$1.run(KerberosTestUtils.java:111)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAs(KerberosTestUtils.java:108)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAsClient(KerberosTestUtils.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator.testAuthenticationPost(TestKerberosAuthenticator.java:74)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invok

[jira] [Commented] (HADOOP-8347) Hadoop Common logs misspell 'successful'

2012-05-02 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267070#comment-13267070
 ] 

Eli Collins commented on HADOOP-8347:
-

+1 thanks Phil

> Hadoop Common logs misspell 'successful'
> 
>
> Key: HADOOP-8347
> URL: https://issues.apache.org/jira/browse/HADOOP-8347
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Philip Zeyliger
> Attachments: 0001-HADOOP-8347.-Fixing-spelling-of-successful.patch
>
>
> 'successfull' is a misspelling of 'successful.'  Trivial patch attached.  The 
> constants are private, and there doesn't seem to be any serialized form of 
> these comments except in log files, so this shouldn't have compatibility 
> issues.





[jira] [Commented] (HADOOP-8214) make hadoop script recognize a full set of deprecated commands

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267036#comment-13267036
 ] 

Hudson commented on HADOOP-8214:


Integrated in Hadoop-Mapreduce-trunk-Commit #2192 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2192/])
HADOOP-8214. make hadoop script recognize a full set of deprecated commands 
(rvs via tucu) (Revision 1333231)

 Result = ABORTED
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1333231
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop


> make hadoop script recognize a full set of deprecated commands
> --
>
> Key: HADOOP-8214
> URL: https://issues.apache.org/jira/browse/HADOOP-8214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.0.0
>
> Attachments: HADOOP-8214.patch.txt
>
>
> bin/hadoop launcher script does a nice job of recognizing deprecated usage 
> and vectoring users towards the proper command line tools (hdfs, mapred). It 
> would be nice if we can take care of the following deprecated commands that 
> don't get the same special treatment:
> {noformat}
>   oiv  apply the offline fsimage viewer to an fsimage
>   dfsgroupsget the groups which users belong to on the Name Node
>   mrgroups get the groups which users belong to on the Job Tracker
>   mradmin  run a Map-Reduce admin client
>   jobtracker   run the MapReduce job Tracker node
>   tasktracker  run a MapReduce task Tracker node
> {noformat}
> Here's what I propose to do with them:
>   # oiv-- issue DEPRECATED warning and run hdfs oiv
>   # dfsgroups  -- issue DEPRECATED warning and run hdfs groups
>   # mrgroups   -- issue DEPRECATED warning and run mapred groups
>   # mradmin-- issue DEPRECATED warning and run yarn rmadmin
>   # jobtracker -- issue DEPRECATED warning and do nothing
>   # tasktracker-- issue DEPRECATED warning and do nothing
> Thoughts?
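The proposal above is essentially a small dispatch table: each deprecated command either maps to a replacement command line or only emits a warning. bin/hadoop is a shell script, so the sketch below is in Java purely for illustration; the command names and replacements come from the proposal, while the class and method names are hypothetical.

```java
import java.util.Map;

// Illustrative sketch of the proposed deprecation mapping. The real logic
// lives in the bin/hadoop shell script; everything here other than the
// command names and their replacements is hypothetical.
public class DeprecatedCommands {

    // Deprecated commands that have a direct replacement.
    private static final Map<String, String> REPLACEMENTS = Map.of(
        "oiv", "hdfs oiv",
        "dfsgroups", "hdfs groups",
        "mrgroups", "mapred groups",
        "mradmin", "yarn rmadmin");

    // Returns the replacement command line, or null when the command should
    // only print a DEPRECATED warning and do nothing (jobtracker/tasktracker).
    public static String resolve(String cmd) {
        if (cmd.equals("jobtracker") || cmd.equals("tasktracker")) {
            System.err.println("DEPRECATED: " + cmd + " no longer exists.");
            return null;
        }
        String replacement = REPLACEMENTS.get(cmd);
        if (replacement != null) {
            System.err.println("DEPRECATED: use '" + replacement + "' instead.");
        }
        return replacement;
    }

    public static void main(String[] args) {
        System.out.println(resolve("oiv"));        // hdfs oiv
        System.out.println(resolve("mradmin"));    // yarn rmadmin
        System.out.println(resolve("jobtracker")); // null (warn only)
    }
}
```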





[jira] [Updated] (HADOOP-8347) Hadoop Common logs misspell 'successful'

2012-05-02 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8347:
---

 Target Version/s: 2.0.0
Affects Version/s: 2.0.0

> Hadoop Common logs misspell 'successful'
> 
>
> Key: HADOOP-8347
> URL: https://issues.apache.org/jira/browse/HADOOP-8347
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Philip Zeyliger
> Attachments: 0001-HADOOP-8347.-Fixing-spelling-of-successful.patch
>
>
> 'successfull' is a misspelling of 'successful.'  Trivial patch attached.  The 
> constants are private, and there doesn't seem to be any serialized form of 
> these comments except in log files, so this shouldn't have compatibility 
> issues.





[jira] [Updated] (HADOOP-8347) Hadoop Common logs misspell 'successful'

2012-05-02 Thread Philip Zeyliger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Zeyliger updated HADOOP-8347:


Status: Patch Available  (was: Open)

> Hadoop Common logs misspell 'successful'
> 
>
> Key: HADOOP-8347
> URL: https://issues.apache.org/jira/browse/HADOOP-8347
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Philip Zeyliger
>Assignee: Philip Zeyliger
> Attachments: 0001-HADOOP-8347.-Fixing-spelling-of-successful.patch
>
>
> 'successfull' is a misspelling of 'successful.'  Trivial patch attached.  The 
> constants are private, and there doesn't seem to be any serialized form of 
> these comments except in log files, so this shouldn't have compatibility 
> issues.





[jira] [Updated] (HADOOP-8347) Hadoop Common logs misspell 'successful'

2012-05-02 Thread Philip Zeyliger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Zeyliger updated HADOOP-8347:


Attachment: 0001-HADOOP-8347.-Fixing-spelling-of-successful.patch

> Hadoop Common logs misspell 'successful'
> 
>
> Key: HADOOP-8347
> URL: https://issues.apache.org/jira/browse/HADOOP-8347
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Philip Zeyliger
>Assignee: Philip Zeyliger
> Attachments: 0001-HADOOP-8347.-Fixing-spelling-of-successful.patch
>
>
> 'successfull' is a misspelling of 'successful.'  Trivial patch attached.  The 
> constants are private, and there doesn't seem to be any serialized form of 
> these comments except in log files, so this shouldn't have compatibility 
> issues.





[jira] [Created] (HADOOP-8347) Hadoop Common logs misspell 'successful'

2012-05-02 Thread Philip Zeyliger (JIRA)
Philip Zeyliger created HADOOP-8347:
---

 Summary: Hadoop Common logs misspell 'successful'
 Key: HADOOP-8347
 URL: https://issues.apache.org/jira/browse/HADOOP-8347
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Philip Zeyliger
Assignee: Philip Zeyliger
 Attachments: 0001-HADOOP-8347.-Fixing-spelling-of-successful.patch

'successfull' is a misspelling of 'successful.'  Trivial patch attached.  The 
constants are private, and there doesn't seem to be any serialized form of 
these comments except in log files, so this shouldn't have compatibility issues.





[jira] [Commented] (HADOOP-8214) make hadoop script recognize a full set of deprecated commands

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267025#comment-13267025
 ] 

Hudson commented on HADOOP-8214:


Integrated in Hadoop-Common-trunk-Commit #2175 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2175/])
HADOOP-8214. make hadoop script recognize a full set of deprecated commands 
(rvs via tucu) (Revision 1333231)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1333231
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop


> make hadoop script recognize a full set of deprecated commands
> --
>
> Key: HADOOP-8214
> URL: https://issues.apache.org/jira/browse/HADOOP-8214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.0.0
>
> Attachments: HADOOP-8214.patch.txt
>
>
> bin/hadoop launcher script does a nice job of recognizing deprecated usage 
> and vectoring users towards the proper command line tools (hdfs, mapred). It 
> would be nice if we can take care of the following deprecated commands that 
> don't get the same special treatment:
> {noformat}
>   oiv  apply the offline fsimage viewer to an fsimage
>   dfsgroupsget the groups which users belong to on the Name Node
>   mrgroups get the groups which users belong to on the Job Tracker
>   mradmin  run a Map-Reduce admin client
>   jobtracker   run the MapReduce job Tracker node
>   tasktracker  run a MapReduce task Tracker node
> {noformat}
> Here's what I propose to do with them:
>   # oiv-- issue DEPRECATED warning and run hdfs oiv
>   # dfsgroups  -- issue DEPRECATED warning and run hdfs groups
>   # mrgroups   -- issue DEPRECATED warning and run mapred groups
>   # mradmin-- issue DEPRECATED warning and run yarn rmadmin
>   # jobtracker -- issue DEPRECATED warning and do nothing
>   # tasktracker-- issue DEPRECATED warning and do nothing
> Thoughts?





[jira] [Commented] (HADOOP-8214) make hadoop script recognize a full set of deprecated commands

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267021#comment-13267021
 ] 

Hudson commented on HADOOP-8214:


Integrated in Hadoop-Hdfs-trunk-Commit #2249 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2249/])
HADOOP-8214. make hadoop script recognize a full set of deprecated commands 
(rvs via tucu) (Revision 1333231)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1333231
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop


> make hadoop script recognize a full set of deprecated commands
> --
>
> Key: HADOOP-8214
> URL: https://issues.apache.org/jira/browse/HADOOP-8214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.0.0
>
> Attachments: HADOOP-8214.patch.txt
>
>
> bin/hadoop launcher script does a nice job of recognizing deprecated usage 
> and vectoring users towards the proper command line tools (hdfs, mapred). It 
> would be nice if we can take care of the following deprecated commands that 
> don't get the same special treatment:
> {noformat}
>   oiv  apply the offline fsimage viewer to an fsimage
>   dfsgroupsget the groups which users belong to on the Name Node
>   mrgroups get the groups which users belong to on the Job Tracker
>   mradmin  run a Map-Reduce admin client
>   jobtracker   run the MapReduce job Tracker node
>   tasktracker  run a MapReduce task Tracker node
> {noformat}
> Here's what I propose to do with them:
>   # oiv-- issue DEPRECATED warning and run hdfs oiv
>   # dfsgroups  -- issue DEPRECATED warning and run hdfs groups
>   # mrgroups   -- issue DEPRECATED warning and run mapred groups
>   # mradmin-- issue DEPRECATED warning and run yarn rmadmin
>   # jobtracker -- issue DEPRECATED warning and do nothing
>   # tasktracker-- issue DEPRECATED warning and do nothing
> Thoughts?





[jira] [Updated] (HADOOP-8214) make hadoop script recognize a full set of deprecated commands

2012-05-02 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8214:
---

   Resolution: Fixed
Fix Version/s: (was: 0.23.2)
   2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

thanks Roman. committed to trunk and branch-2

> make hadoop script recognize a full set of deprecated commands
> --
>
> Key: HADOOP-8214
> URL: https://issues.apache.org/jira/browse/HADOOP-8214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.0.0
>
> Attachments: HADOOP-8214.patch.txt
>
>
> bin/hadoop launcher script does a nice job of recognizing deprecated usage 
> and vectoring users towards the proper command line tools (hdfs, mapred). It 
> would be nice if we can take care of the following deprecated commands that 
> don't get the same special treatment:
> {noformat}
>   oiv  apply the offline fsimage viewer to an fsimage
>   dfsgroupsget the groups which users belong to on the Name Node
>   mrgroups get the groups which users belong to on the Job Tracker
>   mradmin  run a Map-Reduce admin client
>   jobtracker   run the MapReduce job Tracker node
>   tasktracker  run a MapReduce task Tracker node
> {noformat}
> Here's what I propose to do with them:
>   # oiv-- issue DEPRECATED warning and run hdfs oiv
>   # dfsgroups  -- issue DEPRECATED warning and run hdfs groups
>   # mrgroups   -- issue DEPRECATED warning and run mapred groups
>   # mradmin-- issue DEPRECATED warning and run yarn rmadmin
>   # jobtracker -- issue DEPRECATED warning and do nothing
>   # tasktracker-- issue DEPRECATED warning and do nothing
> Thoughts?





[jira] [Commented] (HADOOP-8307) The task-controller is not packaged in the tarball

2012-05-02 Thread Matt Foley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266977#comment-13266977
 ] 

Matt Foley commented on HADOOP-8307:


I have reverted only the build.xml portion of that merge, which was apparently 
accidentally included.

Does that mean we don't need HADOOP-8307 any more?

> The task-controller is not packaged in the tarball
> --
>
> Key: HADOOP-8307
> URL: https://issues.apache.org/jira/browse/HADOOP-8307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.3
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: hadoop-8307.patch
>
>
> Ant, in some situations, puts artifacts such as task-controller into the 
> build/hadoop-*/ directory before the "package" target deletes it to start 
> over.





[jira] [Issue Comment Edited] (HADOOP-8307) The task-controller is not packaged in the tarball

2012-05-02 Thread Matt Foley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266977#comment-13266977
 ] 

Matt Foley edited comment on HADOOP-8307 at 5/2/12 10:44 PM:
-

I have reverted only the build.xml portion of that merge, which was apparently 
accidentally included.
See 
https://issues.apache.org/jira/browse/MAPREDUCE-3377?focusedCommentId=13266972&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13266972

Does that mean we don't need HADOOP-8307 any more?

  was (Author: mattf):
I have reverted only the build.xml portion of that merge, which was 
apparently accidentally included.

Does that mean we don't need HADOOP-8307 any more?
  
> The task-controller is not packaged in the tarball
> --
>
> Key: HADOOP-8307
> URL: https://issues.apache.org/jira/browse/HADOOP-8307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.3
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: hadoop-8307.patch
>
>
> Ant, in some situations, puts artifacts such as task-controller into the 
> build/hadoop-*/ directory before the "package" target deletes it to start 
> over.





[jira] [Commented] (HADOOP-8342) HDFS command fails with exception following merge of HADOOP-8325

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266975#comment-13266975
 ] 

Hudson commented on HADOOP-8342:


Integrated in Hadoop-Mapreduce-trunk-Commit #2191 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2191/])
HADOOP-8342. HDFS command fails with exception following merge of 
HADOOP-8325 (tucu) (Revision 1333224)

 Result = ABORTED
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1333224
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java


> HDFS command fails with exception following merge of HADOOP-8325
> 
>
> Key: HADOOP-8342
> URL: https://issues.apache.org/jira/browse/HADOOP-8342
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0
> Environment: QE tests on version 2.0.1205010603
>Reporter: Randy Clayton
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.0
>
> Attachments: HDFS-3337.patch
>
>
> We are seeing most hdfs commands in our nightly acceptance tests fail with an 
> exception as shown below. This started within a few hours of the merge of 
> HADOOP-8325 on 4/30/2012.
> hdfs --config conf/hadoop/ dfs -ls dirname
> ls: `dirname': No such file or directory
> 12/05/01 16:57:52 WARN util.ShutdownHookManager: ShutdownHook 
> 'ClientFinalizer' failed, java.lang.IllegalStateException: Shutdown in 
> progress, cannot remove a shutdownHook
> java.lang.IllegalStateException: Shutdown in progress, cannot remove a 
> shutdownHook
>   at 
> org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:166)
>   at org.apache.hadoop.fs.FileSystem$Cache.remove(FileSystem.java:2202)
>   at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2251)
>   at 
> org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
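The stack trace above shows the general failure mode: once JVM shutdown has begun, trying to unregister a shutdown hook throws IllegalStateException, so cleanup code running from within a shutdown hook must skip hook removal. The committed fix was in FileSystem.java; the sketch below only illustrates that pattern, and all names in it are hypothetical.

```java
// Illustrative pattern behind the fix (the actual change was made in
// FileSystem.java): when the JVM is already shutting down,
// Runtime.removeShutdownHook() throws IllegalStateException, so the cache
// closer must skip hook removal in that case. All names are hypothetical.
public class CacheCloser {
    private volatile boolean shutdownInProgress = false;

    // Called by the shutdown hook itself before it closes the cache.
    public void markShutdownInProgress() { shutdownInProgress = true; }

    // Returns true if the hook was (safely) removable, false if removal was
    // skipped because shutdown is already running.
    public boolean closeAll() {
        if (!shutdownInProgress) {
            // Safe to unregister: the JVM is not shutting down yet.
            // Runtime.getRuntime().removeShutdownHook(hook);
            return true;
        }
        // Shutdown already in progress: removing the hook would throw
        // IllegalStateException, so just close resources and leave it be.
        return false;
    }
}
```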





[jira] [Commented] (HADOOP-8325) Add a ShutdownHookManager to be used by different components instead of the JVM shutdownhook

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266974#comment-13266974
 ] 

Hudson commented on HADOOP-8325:


Integrated in Hadoop-Mapreduce-trunk-Commit #2191 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2191/])
HADOOP-8342. HDFS command fails with exception following merge of 
HADOOP-8325 (tucu) (Revision 1333224)

 Result = ABORTED
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1333224
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java


> Add a ShutdownHookManager to be used by different components instead of the 
> JVM shutdownhook
> 
>
> Key: HADOOP-8325
> URL: https://issues.apache.org/jira/browse/HADOOP-8325
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HADOOP-8325.patch, HADOOP-8325.patch, HADOOP-8325.patch, 
> HADOOP-8325.patch, HADOOP-8325.patch, HADOOP-8325.patch, HADOOP-8325.patch, 
> HADOOP-8325.patch
>
>
> FileSystem adds a JVM shutdown hook when a filesystem instance is cached.
> MRAppMaster also uses a JVM shutdown hook; among other things, the 
> MRAppMaster JVM shutdown hook is used to ensure state is written to HDFS.
> This creates a race condition because each JVM shutdown hook is a separate 
> thread and if there are multiple JVM shutdown hooks there is not assurance of 
> order of execution, they could even run in parallel.





[jira] [Commented] (HADOOP-8343) Allow configuration of authorization for JmxJsonServlet and MetricsServlet

2012-05-02 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266970#comment-13266970
 ] 

Alejandro Abdelnur commented on HADOOP-8343:


javadoc warnings seem unrelated

> Allow configuration of authorization for JmxJsonServlet and MetricsServlet
> --
>
> Key: HADOOP-8343
> URL: https://issues.apache.org/jira/browse/HADOOP-8343
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-8343.patch, HADOOP-8343.patch
>
>
> When using authorization for the daemons' web server, it would be useful to 
> specifically control the authorization requirements for accessing /jmx and 
> /metrics.  Currently, they require administrative access.  This JIRA would 
> propose that whether or not they are available to administrators only or to 
> all users be controlled by "hadoop.instrumentation.requires.administrator" 
> (or similar).  The default would be that administrator access is required.
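The issue floats "hadoop.instrumentation.requires.administrator" only as a candidate name ("or similar"), so the fragment below is purely illustrative of how such a switch might look in a Hadoop configuration file; neither the property name nor its default is a committed configuration key.

```xml
<!-- Illustrative only: this property name is the one floated in the issue
     ("or similar"); it is not a committed configuration key. -->
<property>
  <name>hadoop.instrumentation.requires.administrator</name>
  <!-- Default per the proposal: administrator access is required
       for /jmx and /metrics. -->
  <value>true</value>
</property>
```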





[jira] [Created] (HADOOP-8346) Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO

2012-05-02 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-8346:
--

 Summary: Changes to support Kerberos with non Sun JVM 
(HADOOP-6941) broke SPNEGO
 Key: HADOOP-8346
 URL: https://issues.apache.org/jira/browse/HADOOP-8346
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0, 3.0.0
Reporter: Alejandro Abdelnur
Priority: Blocker
 Fix For: 2.0.0


Before HADOOP-6941, the hadoop-auth test cases with Kerberos ON passed (*mvn 
test -PtestKerberos*).

After HADOOP-6941, the tests fail with the error below.

Doing some IDE debugging, I've found that the changes in HADOOP-6941 are 
causing the JVM Kerberos libraries to append an extra element to the Kerberos 
principal of the server (on the client side, when creating the token), so 
*HTTP/localhost* ends up being *HTTP/localhost/localhost*. Then, when 
contacting the KDC to get the ticket-granting ticket, the server principal is 
unknown.

{code}
testAuthenticationPost(org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator)
  Time elapsed: 0.053 sec  <<< ERROR!
org.apache.hadoop.security.authentication.client.AuthenticationException: 
GSSException: No valid credentials provided (Mechanism level: Server not found 
in Kerberos database (7) - UNKNOWN_SERVER)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:236)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:142)
at 
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
at 
org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:124)
at 
org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:77)
at 
org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:74)
at 
org.apache.hadoop.security.authentication.KerberosTestUtils$1.run(KerberosTestUtils.java:111)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.authentication.KerberosTestUtils.doAs(KerberosTestUtils.java:108)
at org.apache.hadoop.security.authentication.KerberosTestUtils.doAsClient(KerberosTestUtils.java:124)
at org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator.testAuthenticationPost(TestKerberosAuthenticator.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:103)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74)
Caused by: GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:6

[jira] [Commented] (HADOOP-8325) Add a ShutdownHookManager to be used by different components instead of the JVM shutdownhook

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266951#comment-13266951
 ] 

Hudson commented on HADOOP-8325:


Integrated in Hadoop-Common-trunk-Commit #2174 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2174/])
HADOOP-8342. HDFS command fails with exception following merge of 
HADOOP-8325 (tucu) (Revision 1333224)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1333224
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java


> Add a ShutdownHookManager to be used by different components instead of the 
> JVM shutdownhook
> 
>
> Key: HADOOP-8325
> URL: https://issues.apache.org/jira/browse/HADOOP-8325
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HADOOP-8325.patch, HADOOP-8325.patch, HADOOP-8325.patch, 
> HADOOP-8325.patch, HADOOP-8325.patch, HADOOP-8325.patch, HADOOP-8325.patch, 
> HADOOP-8325.patch
>
>
> FileSystem adds a JVM shutdown hook when a filesystem instance is cached.
> MRAppMaster also uses a JVM shutdown hook; among other things, the MRAppMaster JVM shutdown hook is used to ensure state is written to HDFS.
> This creates a race condition: each JVM shutdown hook is a separate thread, and when there are multiple JVM shutdown hooks there is no assurance of execution order; they could even run in parallel.
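The single-manager approach proposed above can be sketched in Java. This is an illustrative sketch only, not the actual Hadoop ShutdownHookManager API: one JVM shutdown hook runs all registered hooks sequentially, highest priority first, which removes the ordering race between independent JVM hooks. The class and method names here are assumptions.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch: a single manager replaces N independent JVM shutdown hooks.
class HookManager {
    private static final class Entry {
        final Runnable hook;
        final int priority;
        Entry(Runnable hook, int priority) { this.hook = hook; this.priority = priority; }
    }

    private final List<Entry> hooks = new ArrayList<>();

    // Components register here instead of calling Runtime.addShutdownHook().
    synchronized void addShutdownHook(Runnable hook, int priority) {
        hooks.add(new Entry(hook, priority));
    }

    // Invoked exactly once, from the single JVM shutdown hook the manager owns.
    synchronized void runHooks() {
        List<Entry> ordered = new ArrayList<>(hooks);
        ordered.sort(Comparator.comparingInt((Entry e) -> e.priority).reversed());
        for (Entry e : ordered) {
            try {
                e.hook.run();   // deterministic, sequential execution
            } catch (Throwable t) {
                // a failing hook must not prevent the remaining hooks from running
            }
        }
    }
}
```

With this shape, MRAppMaster's state-flushing hook can be given a higher priority than FileSystem's cache-closing hook, so the cached filesystems are still usable when the app master's hook runs.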

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8342) HDFS command fails with exception following merge of HADOOP-8325

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266952#comment-13266952
 ] 

Hudson commented on HADOOP-8342:


Integrated in Hadoop-Common-trunk-Commit #2174 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2174/])
HADOOP-8342. HDFS command fails with exception following merge of 
HADOOP-8325 (tucu) (Revision 1333224)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1333224
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java


> HDFS command fails with exception following merge of HADOOP-8325
> 
>
> Key: HADOOP-8342
> URL: https://issues.apache.org/jira/browse/HADOOP-8342
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0
> Environment: QE tests on version 2.0.1205010603
>Reporter: Randy Clayton
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.0
>
> Attachments: HDFS-3337.patch
>
>
> We are seeing most hdfs commands in our nightly acceptance tests fail with an exception as shown below. This started within a few hours of the merge of HADOOP-8325 on 4/30/2012.
> hdfs --config conf/hadoop/ dfs -ls dirname
> ls: `dirname': No such file or directory
> 12/05/01 16:57:52 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' failed, java.lang.IllegalStateException: Shutdown in progress, cannot remove a shutdownHook
> java.lang.IllegalStateException: Shutdown in progress, cannot remove a shutdownHook
>   at org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:166)
>   at org.apache.hadoop.fs.FileSystem$Cache.remove(FileSystem.java:2202)
>   at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2231)
>   at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2251)
>   at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
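The stack trace above suggests the shape of the fix: FileSystem.Cache.closeAll() runs from the shutdown hook itself, so removeShutdownHook() must be guarded once shutdown has begun. A minimal illustrative sketch follows; the method names are assumptions, not the committed Hadoop code.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: the manager throws if a hook is removed mid-shutdown, mirroring
// the IllegalStateException in the report, so callers that may run from
// the shutdown hook itself need to check isShutdownInProgress() first.
class ShutdownHookManagerSketch {
    private final Set<Runnable> hooks = new HashSet<>();
    private final AtomicBoolean shutdownInProgress = new AtomicBoolean(false);

    synchronized void addShutdownHook(Runnable hook) { hooks.add(hook); }

    boolean isShutdownInProgress() { return shutdownInProgress.get(); }

    // Called once by the single JVM shutdown hook before hooks are run.
    void beginShutdown() { shutdownInProgress.set(true); }

    synchronized boolean removeShutdownHook(Runnable hook) {
        if (shutdownInProgress.get()) {
            throw new IllegalStateException(
                "Shutdown in progress, cannot remove a shutdownHook");
        }
        return hooks.remove(hook);
    }
}

// Caller-side guard -- the likely shape of the fix, stated as an assumption:
//   if (!manager.isShutdownInProgress()) {
//       manager.removeShutdownHook(clientFinalizer);
//   }
```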

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8342) HDFS command fails with exception following merge of HADOOP-8325

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266947#comment-13266947
 ] 

Hudson commented on HADOOP-8342:


Integrated in Hadoop-Hdfs-trunk-Commit #2248 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2248/])
HADOOP-8342. HDFS command fails with exception following merge of 
HADOOP-8325 (tucu) (Revision 1333224)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1333224
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java


> HDFS command fails with exception following merge of HADOOP-8325
> 
>
> Key: HADOOP-8342
> URL: https://issues.apache.org/jira/browse/HADOOP-8342
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0
> Environment: QE tests on version 2.0.1205010603
>Reporter: Randy Clayton
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.0
>
> Attachments: HDFS-3337.patch
>
>
> We are seeing most hdfs commands in our nightly acceptance tests fail with an exception as shown below. This started within a few hours of the merge of HADOOP-8325 on 4/30/2012.
> hdfs --config conf/hadoop/ dfs -ls dirname
> ls: `dirname': No such file or directory
> 12/05/01 16:57:52 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' failed, java.lang.IllegalStateException: Shutdown in progress, cannot remove a shutdownHook
> java.lang.IllegalStateException: Shutdown in progress, cannot remove a shutdownHook
>   at org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:166)
>   at org.apache.hadoop.fs.FileSystem$Cache.remove(FileSystem.java:2202)
>   at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2231)
>   at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2251)
>   at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8325) Add a ShutdownHookManager to be used by different components instead of the JVM shutdownhook

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266946#comment-13266946
 ] 

Hudson commented on HADOOP-8325:


Integrated in Hadoop-Hdfs-trunk-Commit #2248 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2248/])
HADOOP-8342. HDFS command fails with exception following merge of 
HADOOP-8325 (tucu) (Revision 1333224)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1333224
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java


> Add a ShutdownHookManager to be used by different components instead of the 
> JVM shutdownhook
> 
>
> Key: HADOOP-8325
> URL: https://issues.apache.org/jira/browse/HADOOP-8325
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HADOOP-8325.patch, HADOOP-8325.patch, HADOOP-8325.patch, 
> HADOOP-8325.patch, HADOOP-8325.patch, HADOOP-8325.patch, HADOOP-8325.patch, 
> HADOOP-8325.patch
>
>
> FileSystem adds a JVM shutdown hook when a filesystem instance is cached.
> MRAppMaster also uses a JVM shutdown hook; among other things, the MRAppMaster JVM shutdown hook is used to ensure state is written to HDFS.
> This creates a race condition: each JVM shutdown hook is a separate thread, and when there are multiple JVM shutdown hooks there is no assurance of execution order; they could even run in parallel.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8342) HDFS command fails with exception following merge of HADOOP-8325

2012-05-02 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8342:
---

   Resolution: Fixed
Fix Version/s: 2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.

> HDFS command fails with exception following merge of HADOOP-8325
> 
>
> Key: HADOOP-8342
> URL: https://issues.apache.org/jira/browse/HADOOP-8342
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0
> Environment: QE tests on version 2.0.1205010603
>Reporter: Randy Clayton
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.0
>
> Attachments: HDFS-3337.patch
>
>
> We are seeing most hdfs commands in our nightly acceptance tests fail with an exception as shown below. This started within a few hours of the merge of HADOOP-8325 on 4/30/2012.
> hdfs --config conf/hadoop/ dfs -ls dirname
> ls: `dirname': No such file or directory
> 12/05/01 16:57:52 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' failed, java.lang.IllegalStateException: Shutdown in progress, cannot remove a shutdownHook
> java.lang.IllegalStateException: Shutdown in progress, cannot remove a shutdownHook
>   at org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:166)
>   at org.apache.hadoop.fs.FileSystem$Cache.remove(FileSystem.java:2202)
>   at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2231)
>   at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2251)
>   at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8342) HDFS command fails with exception following merge of HADOOP-8325

2012-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266943#comment-13266943
 ] 

Hadoop QA commented on HADOOP-8342:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12525211/HDFS-3337.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.fs.viewfs.TestViewFsTrash

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/923//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/923//console

This message is automatically generated.

> HDFS command fails with exception following merge of HADOOP-8325
> 
>
> Key: HADOOP-8342
> URL: https://issues.apache.org/jira/browse/HADOOP-8342
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0
> Environment: QE tests on version 2.0.1205010603
>Reporter: Randy Clayton
>Assignee: Alejandro Abdelnur
> Fix For: 2.0.0
>
> Attachments: HDFS-3337.patch
>
>
> We are seeing most hdfs commands in our nightly acceptance tests fail with an exception as shown below. This started within a few hours of the merge of HADOOP-8325 on 4/30/2012.
> hdfs --config conf/hadoop/ dfs -ls dirname
> ls: `dirname': No such file or directory
> 12/05/01 16:57:52 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' failed, java.lang.IllegalStateException: Shutdown in progress, cannot remove a shutdownHook
> java.lang.IllegalStateException: Shutdown in progress, cannot remove a shutdownHook
>   at org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:166)
>   at org.apache.hadoop.fs.FileSystem$Cache.remove(FileSystem.java:2202)
>   at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2231)
>   at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2251)
>   at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8343) Allow configuration of authorization for JmxJsonServlet and MetricsServlet

2012-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266942#comment-13266942
 ] 

Hadoop QA commented on HADOOP-8343:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12525350/HADOOP-8343.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified test 
files.

-1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/922//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/922//console

This message is automatically generated.

> Allow configuration of authorization for JmxJsonServlet and MetricsServlet
> --
>
> Key: HADOOP-8343
> URL: https://issues.apache.org/jira/browse/HADOOP-8343
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-8343.patch, HADOOP-8343.patch
>
>
> When using authorization for the daemons' web server, it would be useful to 
> specifically control the authorization requirements for accessing /jmx and 
> /metrics.  Currently, they require administrative access.  This JIRA would 
> propose that whether or not they are available to administrators only or to 
> all users be controlled by "hadoop.instrumentation.requires.administrator" 
> (or similar).  The default would be that administrator access is required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8343) Allow configuration of authorization for JmxJsonServlet and MetricsServlet

2012-05-02 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8343:
---

Attachment: HADOOP-8343.patch

> Allow configuration of authorization for JmxJsonServlet and MetricsServlet
> --
>
> Key: HADOOP-8343
> URL: https://issues.apache.org/jira/browse/HADOOP-8343
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-8343.patch, HADOOP-8343.patch
>
>
> When using authorization for the daemons' web server, it would be useful to 
> specifically control the authorization requirements for accessing /jmx and 
> /metrics.  Currently, they require administrative access.  This JIRA would 
> propose that whether or not they are available to administrators only or to 
> all users be controlled by "hadoop.instrumentation.requires.administrator" 
> (or similar).  The default would be that administrator access is required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8343) Allow configuration of authorization for JmxJsonServlet and MetricsServlet

2012-05-02 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8343:
---

Attachment: (was: HADOOP-8343.patch)

> Allow configuration of authorization for JmxJsonServlet and MetricsServlet
> --
>
> Key: HADOOP-8343
> URL: https://issues.apache.org/jira/browse/HADOOP-8343
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-8343.patch, HADOOP-8343.patch
>
>
> When using authorization for the daemons' web server, it would be useful to 
> specifically control the authorization requirements for accessing /jmx and 
> /metrics.  Currently, they require administrative access.  This JIRA would 
> propose that whether or not they are available to administrators only or to 
> all users be controlled by "hadoop.instrumentation.requires.administrator" 
> (or similar).  The default would be that administrator access is required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8343) Allow configuration of authorization for JmxJsonServlet and MetricsServlet

2012-05-02 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8343:
---

Status: Patch Available  (was: Open)

> Allow configuration of authorization for JmxJsonServlet and MetricsServlet
> --
>
> Key: HADOOP-8343
> URL: https://issues.apache.org/jira/browse/HADOOP-8343
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-8343.patch, HADOOP-8343.patch
>
>
> When using authorization for the daemons' web server, it would be useful to 
> specifically control the authorization requirements for accessing /jmx and 
> /metrics.  Currently, they require administrative access.  This JIRA would 
> propose that whether or not they are available to administrators only or to 
> all users be controlled by "hadoop.instrumentation.requires.administrator" 
> (or similar).  The default would be that administrator access is required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8343) Allow configuration of authorization for JmxJsonServlet and MetricsServlet

2012-05-02 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8343:
---

Attachment: HADOOP-8343.patch

After further digging I think I have figured out how things are supposed to work:

# The instrumentation servlets (stacks/, logLevel/, conf/, metrics/, jmx/) are not authentication-protected by the built-in SPNEGO filter.
# The instrumentation servlets are authentication-protected if a custom filter (via FilterInitializer) is added.
# The instrumentation servlets had a hasAdminAccess() check guarding access, restricting it to admin users when security/authorization is ON. This check was incorrect and was fixed by HADOOP-8314.

The HADOOP-8314 fix had the side effect of disabling instrumentation access for users not in an ACL.

While that may be desirable in certain deployments, it is quite common (and reasonable) to allow instrumentation access without requiring authentication or authorization.

The attached patch therefore introduces (as the original approach suggested) a property *hadoop.security.authorization.for.instrumentation* to control whether authorization is enforced on the instrumentation servlets. The patch does not change anything related to authentication requirements (which can still be added via a filter initializer). The patch modifies the 5 instrumentation servlets to use the new logic (encapsulated in the *checkInstrumentationAccess()* method).

> Allow configuration of authorization for JmxJsonServlet and MetricsServlet
> --
>
> Key: HADOOP-8343
> URL: https://issues.apache.org/jira/browse/HADOOP-8343
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-8343.patch, HADOOP-8343.patch
>
>
> When using authorization for the daemons' web server, it would be useful to 
> specifically control the authorization requirements for accessing /jmx and 
> /metrics.  Currently, they require administrative access.  This JIRA would 
> propose that whether or not they are available to administrators only or to 
> all users be controlled by "hadoop.instrumentation.requires.administrator" 
> (or similar).  The default would be that administrator access is required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8342) HDFS command fails with exception following merge of HADOOP-8325

2012-05-02 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8342:
---

Target Version/s: 2.0.0
  Status: Patch Available  (was: Open)

Marking PA for Tucu.

> HDFS command fails with exception following merge of HADOOP-8325
> 
>
> Key: HADOOP-8342
> URL: https://issues.apache.org/jira/browse/HADOOP-8342
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0
> Environment: QE tests on version 2.0.1205010603
>Reporter: Randy Clayton
>Assignee: Alejandro Abdelnur
> Attachments: HDFS-3337.patch
>
>
> We are seeing most hdfs commands in our nightly acceptance tests fail with an exception as shown below. This started within a few hours of the merge of HADOOP-8325 on 4/30/2012.
> hdfs --config conf/hadoop/ dfs -ls dirname
> ls: `dirname': No such file or directory
> 12/05/01 16:57:52 WARN util.ShutdownHookManager: ShutdownHook 'ClientFinalizer' failed, java.lang.IllegalStateException: Shutdown in progress, cannot remove a shutdownHook
> java.lang.IllegalStateException: Shutdown in progress, cannot remove a shutdownHook
>   at org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:166)
>   at org.apache.hadoop.fs.FileSystem$Cache.remove(FileSystem.java:2202)
>   at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2231)
>   at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2251)
>   at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8307) The task-controller is not packaged in the tarball

2012-05-02 Thread Giridharan Kesavan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266845#comment-13266845
 ] 

Giridharan Kesavan commented on HADOOP-8307:


This issue was introduced by the merge "http://svn.apache.org/viewvc?view=revision&revision=1306744", which removed the subant call on the task-controller and jsvc ant targets. The merge also affects the hadoop rpm and deb packages by leaving the task-controller binary out. I propose we revert the above merge, which would solve the issue of the task-controller binary not being packaged in the rpm, deb, and tarball.

> The task-controller is not packaged in the tarball
> --
>
> Key: HADOOP-8307
> URL: https://issues.apache.org/jira/browse/HADOOP-8307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.3
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: hadoop-8307.patch
>
>
> Ant in some situations, puts artifacts such as task-controller into the 
> build/hadoop-*/ directory before the "package" target deletes it to start 
> over.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8307) The task-controller is not packaged in the tarball

2012-05-02 Thread Giridharan Kesavan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266843#comment-13266843
 ] 

Giridharan Kesavan commented on HADOOP-8307:


This issue was introduced by the merge "http://svn.apache.org/viewvc?view=revision&revision=1306744", which removed the subant call on the task-controller and jsvc ant targets. The merge also affects the hadoop rpm and deb packages by leaving the task-controller binary out. I propose we revert the above merge, which would solve the issue of the task-controller binary missing from the rpm, deb, and tarball.

> The task-controller is not packaged in the tarball
> --
>
> Key: HADOOP-8307
> URL: https://issues.apache.org/jira/browse/HADOOP-8307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.3
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: hadoop-8307.patch
>
>
> Ant in some situations, puts artifacts such as task-controller into the 
> build/hadoop-*/ directory before the "package" target deletes it to start 
> over.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8185) Update namenode -format documentation and add -nonInteractive and -force

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266824#comment-13266824
 ] 

Hudson commented on HADOOP-8185:


Integrated in Hadoop-Mapreduce-trunk-Commit #2189 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2189/])
HADOOP-8185. Update namenode -format documentation and add -nonInteractive 
and -force. Contributed by Arpit Gupta. (Revision 1333141)

 Result = ABORTED
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1333141
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/commands_manual.xml


> Update namenode -format documentation and add -nonInteractive and -force
> 
>
> Key: HADOOP-8185
> URL: https://issues.apache.org/jira/browse/HADOOP-8185
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
> Fix For: 2.0.0
>
> Attachments: HADOOP-8185.patch, HADOOP-8185.patch, HADOOP-8185.patch, 
> HADOOP-8185.patch
>
>
> documentation changes related to HDFS-3094

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8185) Update namenode -format documentation and add -nonInteractive and -force

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266810#comment-13266810
 ] 

Hudson commented on HADOOP-8185:


Integrated in Hadoop-Hdfs-trunk-Commit #2247 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2247/])
HADOOP-8185. Update namenode -format documentation and add -nonInteractive 
and -force. Contributed by Arpit Gupta. (Revision 1333141)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1333141
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/commands_manual.xml


> Update namenode -format documentation and add -nonInteractive and -force
> 
>
> Key: HADOOP-8185
> URL: https://issues.apache.org/jira/browse/HADOOP-8185
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
> Fix For: 2.0.0
>
> Attachments: HADOOP-8185.patch, HADOOP-8185.patch, HADOOP-8185.patch, 
> HADOOP-8185.patch
>
>
> documentation changes related to HDFS-3094





[jira] [Commented] (HADOOP-8185) Update namenode -format documentation and add -nonInteractive and -force

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266805#comment-13266805
 ] 

Hudson commented on HADOOP-8185:


Integrated in Hadoop-Common-trunk-Commit #2173 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2173/])
HADOOP-8185. Update namenode -format documentation and add -nonInteractive 
and -force. Contributed by Arpit Gupta. (Revision 1333141)

 Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1333141
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/commands_manual.xml


> Update namenode -format documentation and add -nonInteractive and -force
> 
>
> Key: HADOOP-8185
> URL: https://issues.apache.org/jira/browse/HADOOP-8185
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
> Fix For: 2.0.0
>
> Attachments: HADOOP-8185.patch, HADOOP-8185.patch, HADOOP-8185.patch, 
> HADOOP-8185.patch
>
>
> documentation changes related to HDFS-3094





[jira] [Commented] (HADOOP-8343) Allow configuration of authorization for JmxJsonServlet and MetricsServlet

2012-05-02 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266803#comment-13266803
 ] 

Alejandro Abdelnur commented on HADOOP-8343:


The scope of this auth requirement should be extended to the /stacks and 
/logLevel servlets as well.

> Allow configuration of authorization for JmxJsonServlet and MetricsServlet
> --
>
> Key: HADOOP-8343
> URL: https://issues.apache.org/jira/browse/HADOOP-8343
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-8343.patch
>
>
> When using authorization for the daemons' web server, it would be useful to 
> specifically control the authorization requirements for accessing /jmx and 
> /metrics.  Currently, they require administrative access.  This JIRA would 
> propose that whether or not they are available to administrators only or to 
> all users be controlled by "hadoop.instrumentation.requires.administrator" 
> (or similar).  The default would be that administrator access is required.





[jira] [Commented] (HADOOP-8214) make hadoop script recognize a full set of deprecated commands

2012-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266799#comment-13266799
 ] 

Hadoop QA commented on HADOOP-8214:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12525332/HADOOP-8214.patch.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/921//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/921//console

This message is automatically generated.

> make hadoop script recognize a full set of deprecated commands
> --
>
> Key: HADOOP-8214
> URL: https://issues.apache.org/jira/browse/HADOOP-8214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 0.23.2
>
> Attachments: HADOOP-8214.patch.txt
>
>
> bin/hadoop launcher script does a nice job of recognizing deprecated usage 
> and vectoring users towards the proper command line tools (hdfs, mapred). It 
> would be nice if we can take care of the following deprecated commands that 
> don't get the same special treatment:
> {noformat}
>   oiv  apply the offline fsimage viewer to an fsimage
>   dfsgroupsget the groups which users belong to on the Name Node
>   mrgroups get the groups which users belong to on the Job Tracker
>   mradmin  run a Map-Reduce admin client
>   jobtracker   run the MapReduce job Tracker node
>   tasktracker  run a MapReduce task Tracker node
> {noformat}
> Here's what I propose to do with them:
>   # oiv-- issue DEPRECATED warning and run hdfs oiv
>   # dfsgroups  -- issue DEPRECATED warning and run hdfs groups
>   # mrgroups   -- issue DEPRECATED warning and run mapred groups
>   # mradmin-- issue DEPRECATED warning and run yarn rmadmin
>   # jobtracker -- issue DEPRECATED warning and do nothing
>   # tasktracker-- issue DEPRECATED warning and do nothing
> Thoughts?
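The proposed mapping could be sketched as a dispatch helper in bin/hadoop. This is illustrative only, not the attached patch: for clarity it echoes the replacement command on stdout instead of exec'ing it, and the function name is made up.

```shell
# Sketch of deprecated-command handling for bin/hadoop (illustrative, not the
# actual patch). Prints a DEPRECATED warning on stderr and emits the
# replacement command on stdout; the real script would exec the new tool.
deprecated_redirect() {
  cmd="$1"; shift
  case "$cmd" in
    oiv)       echo "DEPRECATED: use 'hdfs oiv' instead"      >&2; echo "hdfs oiv $*" ;;
    dfsgroups) echo "DEPRECATED: use 'hdfs groups' instead"   >&2; echo "hdfs groups $*" ;;
    mrgroups)  echo "DEPRECATED: use 'mapred groups' instead" >&2; echo "mapred groups $*" ;;
    mradmin)   echo "DEPRECATED: use 'yarn rmadmin' instead"  >&2; echo "yarn rmadmin $*" ;;
    jobtracker|tasktracker)
      # No replacement exists any more: warn and do nothing.
      echo "DEPRECATED: '$cmd' no longer exists" >&2; return 1 ;;
    *) return 2 ;;  # not a deprecated command; fall through to normal handling
  esac
}

deprecated_redirect mradmin -refreshQueues
```

Running the last line prints the warning on stderr and `yarn rmadmin -refreshQueues` on stdout.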





[jira] [Commented] (HADOOP-8345) HttpServer adds SPNEGO filter mapping but does not register the SPNEGO filter

2012-05-02 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266796#comment-13266796
 ] 

Alejandro Abdelnur commented on HADOOP-8345:


Also, we should probably revisit how the web UI gets protected with SPNEGO, 
which at the moment is documented as adding the 
AuthenticationFilterInitializer. If we already have the SPNEGO filter 
registered, it would just be a matter of handling that in the code. This 
handling should be done with a switch, as the web UIs in many setups may not 
want to use SPNEGO because browsers are not configured to use Kerberos 
(instead they want to use their own custom intranet authentication).
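The switch idea above could look roughly like this. It is a standalone sketch, not Hadoop's actual HttpServer code, and the property name `hadoop.http.webui.spnego.enabled` is hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Standalone sketch of a SPNEGO on/off switch for the web UI (not Hadoop's
// real HttpServer API; the property name below is hypothetical).
public class SpnegoSwitchSketch {
    static final String SPNEGO_UI_KEY = "hadoop.http.webui.spnego.enabled";

    /** Filters a common-level HttpServer would register for the web UI. */
    static List<String> webUiFilters(Map<String, String> conf, boolean securityOn) {
        List<String> filters = new ArrayList<>();
        if (securityOn && Boolean.parseBoolean(conf.getOrDefault(SPNEGO_UI_KEY, "false"))) {
            // SPNEGO registered once at the common level, for every component.
            filters.add("SPNEGO");
        }
        // Setups whose browsers lack Kerberos keep their own intranet filter.
        return filters;
    }

    public static void main(String[] args) {
        System.out.println(webUiFilters(Map.of(), true));                      // []
        System.out.println(webUiFilters(Map.of(SPNEGO_UI_KEY, "true"), true)); // [SPNEGO]
    }
}
```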

> HttpServer adds SPNEGO filter mapping but does not register the SPNEGO filter
> -
>
> Key: HADOOP-8345
> URL: https://issues.apache.org/jira/browse/HADOOP-8345
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0
>Reporter: Alejandro Abdelnur
>Priority: Critical
> Fix For: 2.0.0
>
>
> It seems the mapping was added to fulfill HDFS requirements, where the SPNEGO 
> filter is registered.
> The registration of the SPNEGO filter should be done at the common level 
> instead, so that it is available to all components using HttpServer when 
> security is ON.





[jira] [Created] (HADOOP-8345) HttpServer adds SPNEGO filter mapping but does not register the SPNEGO filter

2012-05-02 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-8345:
--

 Summary: HttpServer adds SPNEGO filter mapping but does not 
register the SPNEGO filter
 Key: HADOOP-8345
 URL: https://issues.apache.org/jira/browse/HADOOP-8345
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0
Reporter: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.0.0


It seems the mapping was added to fulfill HDFS requirements, where the SPNEGO 
filter is registered.

The registration of the SPNEGO filter should be done at the common level 
instead, so that it is available to all components using HttpServer when 
security is ON.






[jira] [Commented] (HADOOP-8214) make hadoop script recognize a full set of deprecated commands

2012-05-02 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266794#comment-13266794
 ] 

Alejandro Abdelnur commented on HADOOP-8214:


+1

> make hadoop script recognize a full set of deprecated commands
> --
>
> Key: HADOOP-8214
> URL: https://issues.apache.org/jira/browse/HADOOP-8214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 0.23.2
>
> Attachments: HADOOP-8214.patch.txt
>
>
> bin/hadoop launcher script does a nice job of recognizing deprecated usage 
> and vectoring users towards the proper command line tools (hdfs, mapred). It 
> would be nice if we can take care of the following deprecated commands that 
> don't get the same special treatment:
> {noformat}
>   oiv  apply the offline fsimage viewer to an fsimage
>   dfsgroupsget the groups which users belong to on the Name Node
>   mrgroups get the groups which users belong to on the Job Tracker
>   mradmin  run a Map-Reduce admin client
>   jobtracker   run the MapReduce job Tracker node
>   tasktracker  run a MapReduce task Tracker node
> {noformat}
> Here's what I propose to do with them:
>   # oiv-- issue DEPRECATED warning and run hdfs oiv
>   # dfsgroups  -- issue DEPRECATED warning and run hdfs groups
>   # mrgroups   -- issue DEPRECATED warning and run mapred groups
>   # mradmin-- issue DEPRECATED warning and run yarn rmadmin
>   # jobtracker -- issue DEPRECATED warning and do nothing
>   # tasktracker-- issue DEPRECATED warning and do nothing
> Thoughts?





[jira] [Updated] (HADOOP-8185) Update namenode -format documentation and add -nonInteractive and -force

2012-05-02 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8185:
---

   Resolution: Fixed
Fix Version/s: 2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've just committed this to branch-2 and trunk. Thanks a lot for the 
contribution, Arpit.

> Update namenode -format documentation and add -nonInteractive and -force
> 
>
> Key: HADOOP-8185
> URL: https://issues.apache.org/jira/browse/HADOOP-8185
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
> Fix For: 2.0.0
>
> Attachments: HADOOP-8185.patch, HADOOP-8185.patch, HADOOP-8185.patch, 
> HADOOP-8185.patch
>
>
> documentation changes related to HDFS-3094





[jira] [Created] (HADOOP-8344) Improve test-patch to make it easier to find javadoc warnings

2012-05-02 Thread Todd Lipcon (JIRA)
Todd Lipcon created HADOOP-8344:
---

 Summary: Improve test-patch to make it easier to find javadoc 
warnings
 Key: HADOOP-8344
 URL: https://issues.apache.org/jira/browse/HADOOP-8344
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, test
Reporter: Todd Lipcon
Priority: Minor


Often I have to spend a lot of time digging through logs to find javadoc 
warnings as the result of a test-patch. Similar to the improvement made in 
HADOOP-8339, we should do the following:
- test-patch should only run javadoc on modules that have changed
- the exclusions "OK_JAVADOC" should be per-project rather than cross-project
- rather than just have a number, we should check in the actual list of 
warnings to ignore and then fuzzy-match the patch warnings against the exclude 
list.
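The fuzzy-match idea in the last bullet could work by normalising away the parts of a javadoc warning that drift between runs (checkout path, line number) before comparing against the checked-in exclude list. A minimal sketch, with made-up warning strings:

```java
import java.util.List;

// Sketch of fuzzy-matching javadoc warnings against a checked-in exclude
// list (illustrative only; the warning strings below are made up).
public class JavadocWarningMatcher {
    /** Strips line numbers and directory prefixes so near-identical warnings compare equal. */
    static String normalize(String warning) {
        return warning
            .replaceAll(":\\d+:", ":")  // drop the line number
            .replaceAll("^\\S*/", "")   // drop the directory prefix of the file path
            .trim();
    }

    static boolean isExcluded(String warning, List<String> excludeList) {
        String norm = normalize(warning);
        return excludeList.stream()
                          .map(JavadocWarningMatcher::normalize)
                          .anyMatch(norm::equals);
    }

    public static void main(String[] args) {
        List<String> excludes = List.of(
            "src/main/java/Foo.java:120: warning - @param argument \"x\" is not a parameter name");
        // Same warning, different line number and checkout path: still matches.
        String seen = "/build/ws/src/main/java/Foo.java:134: warning - @param argument \"x\" is not a parameter name";
        System.out.println(isExcluded(seen, excludes));  // prints "true"
    }
}
```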





[jira] [Updated] (HADOOP-8185) Update namenode -format documentation and add -nonInteractive and -force

2012-05-02 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8185:
---

Component/s: documentation

> Update namenode -format documentation and add -nonInteractive and -force
> 
>
> Key: HADOOP-8185
> URL: https://issues.apache.org/jira/browse/HADOOP-8185
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
> Attachments: HADOOP-8185.patch, HADOOP-8185.patch, HADOOP-8185.patch, 
> HADOOP-8185.patch
>
>
> documentation changes related to HDFS-3094





[jira] [Updated] (HADOOP-8343) Allow configuration of authorization for JmxJsonServlet and MetricsServlet

2012-05-02 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8343:
---

Status: Open  (was: Patch Available)

After some investigation into how the HttpServer binds the JMX and METRICS 
servlets (hardcoded not to add the SPNEGO filter), it seems to me that the 
correct approach would be:

* have a 'hadoop.security.require.authentication.for.instrumentation' config 
property, set to FALSE by default.
* when HttpServer adds the JMX, METRICS and CONF servlets, register them to 
require authentication or not based on the above property.
* remove the hasAdminAccess check for the JMX, METRICS and CONF servlets.
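The proposal above could be sketched as follows. The property name comes from the comment, but the surrounding class is a stand-in, not Hadoop's real HttpServer API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the proposed behaviour for HADOOP-8343 (stand-in code, not the
// real HttpServer; only the config property name comes from the discussion).
public class InstrumentationAuthSketch {
    static final String REQUIRE_AUTH_KEY =
        "hadoop.security.require.authentication.for.instrumentation";

    /** Maps each instrumentation servlet path to whether it requires authentication. */
    static Map<String, Boolean> servletAuth(Map<String, String> conf) {
        boolean requireAuth =
            Boolean.parseBoolean(conf.getOrDefault(REQUIRE_AUTH_KEY, "false"));
        Map<String, Boolean> servlets = new LinkedHashMap<>();
        for (String path : new String[] {"/jmx", "/metrics", "/conf"}) {
            // No separate hasAdminAccess check: the property alone decides.
            servlets.put(path, requireAuth);
        }
        return servlets;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new LinkedHashMap<>();
        // Default is FALSE: open to all users.
        System.out.println(servletAuth(conf));  // {/jmx=false, /metrics=false, /conf=false}
        conf.put(REQUIRE_AUTH_KEY, "true");
        System.out.println(servletAuth(conf));  // {/jmx=true, /metrics=true, /conf=true}
    }
}
```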


> Allow configuration of authorization for JmxJsonServlet and MetricsServlet
> --
>
> Key: HADOOP-8343
> URL: https://issues.apache.org/jira/browse/HADOOP-8343
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-8343.patch
>
>
> When using authorization for the daemons' web server, it would be useful to 
> specifically control the authorization requirements for accessing /jmx and 
> /metrics.  Currently, they require administrative access.  This JIRA would 
> propose that whether or not they are available to administrators only or to 
> all users be controlled by "hadoop.instrumentation.requires.administrator" 
> (or similar).  The default would be that administrator access is required.





[jira] [Updated] (HADOOP-8185) Update namenode -format documentation and add -nonInteractive and -force

2012-05-02 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-8185:
---

 Target Version/s: 2.0.0
Affects Version/s: (was: 0.24.0)
   2.0.0

> Update namenode -format documentation and add -nonInteractive and -force
> 
>
> Key: HADOOP-8185
> URL: https://issues.apache.org/jira/browse/HADOOP-8185
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
> Attachments: HADOOP-8185.patch, HADOOP-8185.patch, HADOOP-8185.patch, 
> HADOOP-8185.patch
>
>
> documentation changes related to HDFS-3094





[jira] [Commented] (HADOOP-8185) Update namenode -format documentation and add -nonInteractive and -force

2012-05-02 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266784#comment-13266784
 ] 

Aaron T. Myers commented on HADOOP-8185:


Thanks a lot for the reminder, Arpit.

+1, the latest patch looks good to me. I'll commit this momentarily.

> Update namenode -format documentation and add -nonInteractive and -force
> 
>
> Key: HADOOP-8185
> URL: https://issues.apache.org/jira/browse/HADOOP-8185
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 0.24.0
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
> Attachments: HADOOP-8185.patch, HADOOP-8185.patch, HADOOP-8185.patch, 
> HADOOP-8185.patch
>
>
> documentation changes related to HDFS-3094





[jira] [Commented] (HADOOP-8336) LocalFileSystem Does not seek to the correct location when Checksumming is off.

2012-05-02 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266777#comment-13266777
 ] 

Elliott Clark commented on HADOOP-8336:
---

We were seeing this when reading 64k blocks: the file was opened, 64k was 
read, then we would seek to 128k and try to read.

I'll try to come back to this with a test or more detailed info in a little 
bit.

> LocalFileSystem Does not seek to the correct location when Checksumming is 
> off.
> ---
>
> Key: HADOOP-8336
> URL: https://issues.apache.org/jira/browse/HADOOP-8336
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Elliott Clark
>Assignee: Todd Lipcon
> Attachments: branch-1-test.txt
>
>
> Hbase was seeing an issue when trying to read data from a local filesystem 
> instance with setVerifyChecksum(false).  On debugging into it, the seek on 
> the file was seeking to the checksum block index, but since checksumming was 
> off that was the incorrect location.





[jira] [Updated] (HADOOP-8214) make hadoop script recognize a full set of deprecated commands

2012-05-02 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8214:
-

Attachment: (was: HADOOP-8214.patch.txt)

> make hadoop script recognize a full set of deprecated commands
> --
>
> Key: HADOOP-8214
> URL: https://issues.apache.org/jira/browse/HADOOP-8214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 0.23.2
>
> Attachments: HADOOP-8214.patch.txt
>
>
> bin/hadoop launcher script does a nice job of recognizing deprecated usage 
> and vectoring users towards the proper command line tools (hdfs, mapred). It 
> would be nice if we can take care of the following deprecated commands that 
> don't get the same special treatment:
> {noformat}
>   oiv  apply the offline fsimage viewer to an fsimage
>   dfsgroupsget the groups which users belong to on the Name Node
>   mrgroups get the groups which users belong to on the Job Tracker
>   mradmin  run a Map-Reduce admin client
>   jobtracker   run the MapReduce job Tracker node
>   tasktracker  run a MapReduce task Tracker node
> {noformat}
> Here's what I propose to do with them:
>   # oiv-- issue DEPRECATED warning and run hdfs oiv
>   # dfsgroups  -- issue DEPRECATED warning and run hdfs groups
>   # mrgroups   -- issue DEPRECATED warning and run mapred groups
>   # mradmin-- issue DEPRECATED warning and run yarn rmadmin
>   # jobtracker -- issue DEPRECATED warning and do nothing
>   # tasktracker-- issue DEPRECATED warning and do nothing
> Thoughts?





[jira] [Updated] (HADOOP-8214) make hadoop script recognize a full set of deprecated commands

2012-05-02 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8214:
-

Attachment: HADOOP-8214.patch.txt

> make hadoop script recognize a full set of deprecated commands
> --
>
> Key: HADOOP-8214
> URL: https://issues.apache.org/jira/browse/HADOOP-8214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 0.23.2
>
> Attachments: HADOOP-8214.patch.txt
>
>
> bin/hadoop launcher script does a nice job of recognizing deprecated usage 
> and vectoring users towards the proper command line tools (hdfs, mapred). It 
> would be nice if we can take care of the following deprecated commands that 
> don't get the same special treatment:
> {noformat}
>   oiv  apply the offline fsimage viewer to an fsimage
>   dfsgroupsget the groups which users belong to on the Name Node
>   mrgroups get the groups which users belong to on the Job Tracker
>   mradmin  run a Map-Reduce admin client
>   jobtracker   run the MapReduce job Tracker node
>   tasktracker  run a MapReduce task Tracker node
> {noformat}
> Here's what I propose to do with them:
>   # oiv-- issue DEPRECATED warning and run hdfs oiv
>   # dfsgroups  -- issue DEPRECATED warning and run hdfs groups
>   # mrgroups   -- issue DEPRECATED warning and run mapred groups
>   # mradmin-- issue DEPRECATED warning and run yarn rmadmin
>   # jobtracker -- issue DEPRECATED warning and do nothing
>   # tasktracker-- issue DEPRECATED warning and do nothing
> Thoughts?





[jira] [Updated] (HADOOP-8214) make hadoop script recognize a full set of deprecated commands

2012-05-02 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8214:
-

Attachment: HADOOP-8214.patch.txt

> make hadoop script recognize a full set of deprecated commands
> --
>
> Key: HADOOP-8214
> URL: https://issues.apache.org/jira/browse/HADOOP-8214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 0.23.2
>
> Attachments: HADOOP-8214.patch.txt
>
>
> bin/hadoop launcher script does a nice job of recognizing deprecated usage 
> and vectoring users towards the proper command line tools (hdfs, mapred). It 
> would be nice if we can take care of the following deprecated commands that 
> don't get the same special treatment:
> {noformat}
>   oiv  apply the offline fsimage viewer to an fsimage
>   dfsgroupsget the groups which users belong to on the Name Node
>   mrgroups get the groups which users belong to on the Job Tracker
>   mradmin  run a Map-Reduce admin client
>   jobtracker   run the MapReduce job Tracker node
>   tasktracker  run a MapReduce task Tracker node
> {noformat}
> Here's what I propose to do with them:
>   # oiv-- issue DEPRECATED warning and run hdfs oiv
>   # dfsgroups  -- issue DEPRECATED warning and run hdfs groups
>   # mrgroups   -- issue DEPRECATED warning and run mapred groups
>   # mradmin-- issue DEPRECATED warning and run yarn rmadmin
>   # jobtracker -- issue DEPRECATED warning and do nothing
>   # tasktracker-- issue DEPRECATED warning and do nothing
> Thoughts?





[jira] [Updated] (HADOOP-8214) make hadoop script recognize a full set of deprecated commands

2012-05-02 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8214:
-

Attachment: (was: HADOOP-8214.patch.txt)

> make hadoop script recognize a full set of deprecated commands
> --
>
> Key: HADOOP-8214
> URL: https://issues.apache.org/jira/browse/HADOOP-8214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 0.23.2
>
> Attachments: HADOOP-8214.patch.txt
>
>
> bin/hadoop launcher script does a nice job of recognizing deprecated usage 
> and vectoring users towards the proper command line tools (hdfs, mapred). It 
> would be nice if we can take care of the following deprecated commands that 
> don't get the same special treatment:
> {noformat}
>   oiv  apply the offline fsimage viewer to an fsimage
>   dfsgroupsget the groups which users belong to on the Name Node
>   mrgroups get the groups which users belong to on the Job Tracker
>   mradmin  run a Map-Reduce admin client
>   jobtracker   run the MapReduce job Tracker node
>   tasktracker  run a MapReduce task Tracker node
> {noformat}
> Here's what I propose to do with them:
>   # oiv-- issue DEPRECATED warning and run hdfs oiv
>   # dfsgroups  -- issue DEPRECATED warning and run hdfs groups
>   # mrgroups   -- issue DEPRECATED warning and run mapred groups
>   # mradmin-- issue DEPRECATED warning and run yarn rmadmin
>   # jobtracker -- issue DEPRECATED warning and do nothing
>   # tasktracker-- issue DEPRECATED warning and do nothing
> Thoughts?





[jira] [Updated] (HADOOP-8214) make hadoop script recognize a full set of deprecated commands

2012-05-02 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8214:
-

Status: Patch Available  (was: Open)

> make hadoop script recognize a full set of deprecated commands
> --
>
> Key: HADOOP-8214
> URL: https://issues.apache.org/jira/browse/HADOOP-8214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 0.23.2
>
> Attachments: HADOOP-8214.patch.txt
>
>
> bin/hadoop launcher script does a nice job of recognizing deprecated usage 
> and vectoring users towards the proper command line tools (hdfs, mapred). It 
> would be nice if we can take care of the following deprecated commands that 
> don't get the same special treatment:
> {noformat}
>   oiv  apply the offline fsimage viewer to an fsimage
>   dfsgroupsget the groups which users belong to on the Name Node
>   mrgroups get the groups which users belong to on the Job Tracker
>   mradmin  run a Map-Reduce admin client
>   jobtracker   run the MapReduce job Tracker node
>   tasktracker  run a MapReduce task Tracker node
> {noformat}
> Here's what I propose to do with them:
>   # oiv-- issue DEPRECATED warning and run hdfs oiv
>   # dfsgroups  -- issue DEPRECATED warning and run hdfs groups
>   # mrgroups   -- issue DEPRECATED warning and run mapred groups
>   # mradmin-- issue DEPRECATED warning and run yarn rmadmin
>   # jobtracker -- issue DEPRECATED warning and do nothing
>   # tasktracker-- issue DEPRECATED warning and do nothing
> Thoughts?





[jira] [Updated] (HADOOP-8214) make hadoop script recognize a full set of deprecated commands

2012-05-02 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8214:
-

Attachment: HADOOP-8214.patch.txt

> make hadoop script recognize a full set of deprecated commands
> --
>
> Key: HADOOP-8214
> URL: https://issues.apache.org/jira/browse/HADOOP-8214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 0.23.2
>
> Attachments: HADOOP-8214.patch.txt
>
>
> bin/hadoop launcher script does a nice job of recognizing deprecated usage 
> and vectoring users towards the proper command line tools (hdfs, mapred). It 
> would be nice if we can take care of the following deprecated commands that 
> don't get the same special treatment:
> {noformat}
>   oiv  apply the offline fsimage viewer to an fsimage
>   dfsgroupsget the groups which users belong to on the Name Node
>   mrgroups get the groups which users belong to on the Job Tracker
>   mradmin  run a Map-Reduce admin client
>   jobtracker   run the MapReduce job Tracker node
>   tasktracker  run a MapReduce task Tracker node
> {noformat}
> Here's what I propose to do with them:
>   # oiv-- issue DEPRECATED warning and run hdfs oiv
>   # dfsgroups  -- issue DEPRECATED warning and run hdfs groups
>   # mrgroups   -- issue DEPRECATED warning and run mapred groups
>   # mradmin-- issue DEPRECATED warning and run yarn rmadmin
>   # jobtracker -- issue DEPRECATED warning and do nothing
>   # tasktracker-- issue DEPRECATED warning and do nothing
> Thoughts?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8185) Update namenode -format documentation and add -nonInteractive and -force

2012-05-02 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266719#comment-13266719
 ] 

Arpit Gupta commented on HADOOP-8185:
-

Aaron, HDFS-3094 is committed to trunk. When you get some time, please review 
and commit the doc changes if they look good.

> Update namenode -format documentation and add -nonInteractive and -force
> 
>
> Key: HADOOP-8185
> URL: https://issues.apache.org/jira/browse/HADOOP-8185
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 0.24.0
>Reporter: Arpit Gupta
>Assignee: Arpit Gupta
> Attachments: HADOOP-8185.patch, HADOOP-8185.patch, HADOOP-8185.patch, 
> HADOOP-8185.patch
>
>
> documentation changes related to HDFS-3094

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8230) Enable sync by default and disable append

2012-05-02 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266700#comment-13266700
 ] 

Suresh Srinivas commented on HADOOP-8230:
-

bq. Wrt #2 personally I don't think we should allow people to disable durable 
sync as that can result in data loss for people running HBase. See HADOOP-8230 
for more info. I'm open to having an option to disable durable sync if you 
think that use case is important.
There are installations where HBase is not used and sync was disabled. This 
patch has now removed that option: when such an installation upgrades to a 
release with this patch, sync is suddenly enabled and there is no way to 
disable it.

bq. (1) there are tests that are using append not to test append per se but for 
the side effects and we'd lose sync test coverage by removing those tests and 
(2) per the description we're keeping the append code path in case someone 
wants to fix the data loss issues in which case it makes sense to keep the test 
coverage as well.
For testing sync with this patch, since sync is enabled by default, you do not 
need the flag, right?

> Enable sync by default and disable append
> -
>
> Key: HADOOP-8230
> URL: https://issues.apache.org/jira/browse/HADOOP-8230
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 1.1.0
>
> Attachments: hadoop-8230.txt
>
>
> Per HDFS-3120 for 1.x let's:
> - Always enable the sync path, which is currently only enabled if 
> dfs.support.append is set
> - Remove the dfs.support.append configuration option. We'll keep the code 
> paths though in case we ever fix append on branch-1, in which case we can add 
> the config option back

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8279) Auto-HA: Allow manual failover to be invoked from zkfc.

2012-05-02 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266698#comment-13266698
 ] 

Aaron T. Myers commented on HADOOP-8279:


+1, the updated patch looks good to me. Thanks a lot for addressing my 
feedback, Todd.

> Auto-HA: Allow manual failover to be invoked from zkfc.
> ---
>
> Key: HADOOP-8279
> URL: https://issues.apache.org/jira/browse/HADOOP-8279
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: auto-failover, ha
>Affects Versions: Auto Failover (HDFS-3042)
>Reporter: Mingjie Lai
>Assignee: Todd Lipcon
> Fix For: Auto Failover (HDFS-3042)
>
> Attachments: hadoop-8279.txt, hadoop-8279.txt, hadoop-8279.txt, 
> hadoop-8279.txt, hadoop-8279.txt
>
>
> HADOOP-8247 introduces a configure flag to prevent potential status 
> inconsistency between zkfc and namenode, by making auto and manual failover 
> mutually exclusive.
> However, as described in section 2.7.2 of the design doc at HDFS-2185, we should 
> allow manual and auto failover to co-exist by:
> - adding some RPC interfaces at zkfc
> - having manual failover triggered by haadmin, and handled by zkfc if auto 
> failover is enabled. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8327) distcpv2 and distcpv1 jars should not coexist

2012-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266646#comment-13266646
 ] 

Hadoop QA commented on HADOOP-8327:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12525302/HADOOP-8327.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

-1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 2 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in hadoop-tools/hadoop-extras.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/920//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/920//artifact/trunk/trunk/patchprocess/newPatchFindbugsWarningshadoop-extras.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/920//console

This message is automatically generated.

> distcpv2 and distcpv1 jars should not coexist
> -
>
> Key: HADOOP-8327
> URL: https://issues.apache.org/jira/browse/HADOOP-8327
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.23.2
>Reporter: Dave Thompson
>Assignee: Dave Thompson
> Attachments: HADOOP-8327-branch-0.23.2.patch, HADOOP-8327.patch
>
>
> Distcp v2 (hadoop-tools/hadoop-distcp/...) and Distcp v1 
> (hadoop-tools/hadoop-extras/...) are currently both built, and the resulting 
> hadoop-distcp-x.jar and hadoop-extras-x.jar end up in the same class path 
> directory. This causes nondeterministic problems: v1 may be launched 
> when v2 is intended, or v2 is launched but later fails on various 
> nodes because of a mismatch with v1.
> According to
> http://docs.oracle.com/javase/6/docs/technotes/tools/windows/classpath.html 
> ("Understanding class path wildcards")
> "The order in which the JAR files in a directory are enumerated in the 
> expanded class path is not specified and may vary from platform to platform 
> and even from moment to moment on the same machine."
> Suggest distcpv1 be deprecated at this point, possibly by discontinuing build 
> of distcpv1.
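The unspecified wildcard enumeration order described above can be probed at runtime. Below is a minimal, hypothetical diagnostic (the class name and the probed resource path are illustrative, not part of Hadoop): it counts how many classpath entries provide the same class file, since more than one hit means the JVM's choice between them is arbitrary.

```java
import java.net.URL;
import java.util.Collections;
import java.util.List;

public class DupClassCheck {
    // List every classpath location that provides the given resource.
    static List<URL> locations(String resource) throws Exception {
        return Collections.list(
            DupClassCheck.class.getClassLoader().getResources(resource));
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical probe: a class both distcp jars might provide.
        String probe = args.length > 0 ? args[0]
                : "org/apache/hadoop/tools/DistCp.class";
        List<URL> hits = locations(probe);
        if (hits.size() > 1) {
            // Two or more jars supply this class: which one loads is
            // unspecified, per the class path wildcard documentation.
            System.out.println("AMBIGUOUS: " + hits.size() + " copies");
        } else {
            System.out.println("OK: " + hits.size() + " copy/copies");
        }
    }
}
```

Run with both jars on the classpath, such a check would flag the ambiguity before a job launches the wrong version.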

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8327) distcpv2 and distcpv1 jars should not coexist

2012-05-02 Thread Dave Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Thompson updated HADOOP-8327:
--

Attachment: HADOOP-8327.patch

Though the fix is intended for branch 0.23.2, I'm attaching a trunk patch now, 
as it has been speculated that the automated patch testing runs against trunk 
rather than the 0.23.2 branch the attached patch name specifies.

> distcpv2 and distcpv1 jars should not coexist
> -
>
> Key: HADOOP-8327
> URL: https://issues.apache.org/jira/browse/HADOOP-8327
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.23.2
>Reporter: Dave Thompson
>Assignee: Dave Thompson
> Attachments: HADOOP-8327-branch-0.23.2.patch, HADOOP-8327.patch
>
>
> Distcp v2 (hadoop-tools/hadoop-distcp/...) and Distcp v1 
> (hadoop-tools/hadoop-extras/...) are currently both built, and the resulting 
> hadoop-distcp-x.jar and hadoop-extras-x.jar end up in the same class path 
> directory. This causes nondeterministic problems: v1 may be launched 
> when v2 is intended, or v2 is launched but later fails on various 
> nodes because of a mismatch with v1.
> According to
> http://docs.oracle.com/javase/6/docs/technotes/tools/windows/classpath.html 
> ("Understanding class path wildcards")
> "The order in which the JAR files in a directory are enumerated in the 
> expanded class path is not specified and may vary from platform to platform 
> and even from moment to moment on the same machine."
> Suggest distcpv1 be deprecated at this point, possibly by discontinuing build 
> of distcpv1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8327) distcpv2 and distcpv1 jars should not coexist

2012-05-02 Thread Dave Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266618#comment-13266618
 ] 

Dave Thompson commented on HADOOP-8327:
---

I don't seem to have any visibility as to why the above auto patching isn't 
succeeding.   Patch looks good to me. 

> distcpv2 and distcpv1 jars should not coexist
> -
>
> Key: HADOOP-8327
> URL: https://issues.apache.org/jira/browse/HADOOP-8327
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.23.2
>Reporter: Dave Thompson
>Assignee: Dave Thompson
> Attachments: HADOOP-8327-branch-0.23.2.patch
>
>
> Distcp v2 (hadoop-tools/hadoop-distcp/...) and Distcp v1 
> (hadoop-tools/hadoop-extras/...) are currently both built, and the resulting 
> hadoop-distcp-x.jar and hadoop-extras-x.jar end up in the same class path 
> directory. This causes nondeterministic problems: v1 may be launched 
> when v2 is intended, or v2 is launched but later fails on various 
> nodes because of a mismatch with v1.
> According to
> http://docs.oracle.com/javase/6/docs/technotes/tools/windows/classpath.html 
> ("Understanding class path wildcards")
> "The order in which the JAR files in a directory are enumerated in the 
> expanded class path is not specified and may vary from platform to platform 
> and even from moment to moment on the same machine."
> Suggest distcpv1 be deprecated at this point, possibly by discontinuing build 
> of distcpv1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8319) FileContext does not support setWriteChecksum

2012-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266614#comment-13266614
 ] 

Hadoop QA commented on HADOOP-8319:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12525299/HADOOP-8319.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

-1 javadoc.  The javadoc tool appears to have generated -6 warning messages.

-1 javac.  The patch appears to cause tar ant target to fail.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to cause Findbugs (version 1.3.9) to fail.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/919//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/919//console

This message is automatically generated.

> FileContext does not support setWriteChecksum
> -
>
> Key: HADOOP-8319
> URL: https://issues.apache.org/jira/browse/HADOOP-8319
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0
>Reporter: John George
>Assignee: John George
> Attachments: HADOOP-8319.patch, HADOOP-8319.patch, HADOOP-8319.patch, 
> HADOOP-8319.patch
>
>
> FileContext does not support setWriteChecksum, so users trying
> to use this functionality fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8319) FileContext does not support setWriteChecksum

2012-05-02 Thread John George (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HADOOP-8319:


Attachment: HADOOP-8319.patch

The following is what I get when I run from the root directory on my trunk with 
this patch. This is the same when I run without my patch.

{code}
-1 overall.  

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated 18 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 20 new Findbugs (version 
1.3.9) warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.
{code}

> FileContext does not support setWriteChecksum
> -
>
> Key: HADOOP-8319
> URL: https://issues.apache.org/jira/browse/HADOOP-8319
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0
>Reporter: John George
>Assignee: John George
> Attachments: HADOOP-8319.patch, HADOOP-8319.patch, HADOOP-8319.patch, 
> HADOOP-8319.patch
>
>
> FileContext does not support setWriteChecksum, so users trying
> to use this functionality fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8319) FileContext does not support setWriteChecksum

2012-05-02 Thread John George (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HADOOP-8319:


Target Version/s: 2.0.0, 3.0.0  (was: 3.0.0, 2.0.0)
  Status: Patch Available  (was: Open)

> FileContext does not support setWriteChecksum
> -
>
> Key: HADOOP-8319
> URL: https://issues.apache.org/jira/browse/HADOOP-8319
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0
>Reporter: John George
>Assignee: John George
> Attachments: HADOOP-8319.patch, HADOOP-8319.patch, HADOOP-8319.patch, 
> HADOOP-8319.patch
>
>
> FileContext does not support setWriteChecksum, so users trying
> to use this functionality fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8319) FileContext does not support setWriteChecksum

2012-05-02 Thread John George (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HADOOP-8319:


Target Version/s: 2.0.0, 3.0.0  (was: 3.0.0, 2.0.0)
  Status: Open  (was: Patch Available)

> FileContext does not support setWriteChecksum
> -
>
> Key: HADOOP-8319
> URL: https://issues.apache.org/jira/browse/HADOOP-8319
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.0, 3.0.0
>Reporter: John George
>Assignee: John George
> Attachments: HADOOP-8319.patch, HADOOP-8319.patch, HADOOP-8319.patch, 
> HADOOP-8319.patch
>
>
> FileContext does not support setWriteChecksum, so users trying
> to use this functionality fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8275) Range check DelegationKey length

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266552#comment-13266552
 ] 

Hudson commented on HADOOP-8275:


Integrated in Hadoop-Mapreduce-trunk #1067 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1067/])
HADOOP-8275. Range check DelegationKey length. Contributed by Colin Patrick 
McCabe (Revision 1332839)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332839
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/DelegationKey.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestWritableUtils.java


> Range check DelegationKey length 
> -
>
> Key: HADOOP-8275
> URL: https://issues.apache.org/jira/browse/HADOOP-8275
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HADOOP-8275.001.patch, HADOOP-8275.002.patch, 
> HADOOP-8275.003.patch
>
>
> Harden serialization logic against malformed or malicious input.
> Add range checking to readVInt, to detect overflows, underflows, and 
> larger-than-expected values.
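The kind of range check being added can be illustrated with a small, self-contained sketch. The method name, byte layout, and limit below are hypothetical, not the actual WritableUtils API; the point is validating a deserialized length before trusting it to allocate a buffer.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class SafeLengthRead {
    // Read a 4-byte length prefix, then that many payload bytes,
    // rejecting negative or larger-than-expected lengths up front.
    static byte[] readLengthPrefixed(DataInputStream in, int maxLen)
            throws IOException {
        int len = in.readInt();
        if (len < 0 || len > maxLen) {
            throw new IOException("value length " + len
                + " out of range [0, " + maxLen + "]");
        }
        byte[] buf = new byte[len];
        in.readFully(buf);
        return buf;
    }

    public static void main(String[] args) throws IOException {
        // Well-formed input: length 2, then the payload "hi".
        byte[] good = {0, 0, 0, 2, 'h', 'i'};
        byte[] payload = readLengthPrefixed(
            new DataInputStream(new ByteArrayInputStream(good)), 16);
        System.out.println(new String(payload)); // prints "hi"

        // Malicious input: length -1, rejected before any allocation.
        byte[] bad = {(byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFF};
        try {
            readLengthPrefixed(
                new DataInputStream(new ByteArrayInputStream(bad)), 16);
        } catch (IOException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Without the check, a hostile length would be fed straight into `new byte[len]`, producing an `OutOfMemoryError` or `NegativeArraySizeException` deep in deserialization instead of a clean error.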

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8172) Configuration no longer sets all keys in a deprecated key list.

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266553#comment-13266553
 ] 

Hudson commented on HADOOP-8172:


Integrated in Hadoop-Mapreduce-trunk #1067 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1067/])
HADOOP-8172. Configuration no longer sets all keys in a deprecated key 
list. (Anupam Seth via bobby) (Revision 1332821)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332821
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationDeprecation.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestDeprecatedKeys.java


> Configuration no longer sets all keys in a deprecated key list.
> ---
>
> Key: HADOOP-8172
> URL: https://issues.apache.org/jira/browse/HADOOP-8172
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 0.23.3, 0.24.0
>Reporter: Robert Joseph Evans
>Assignee: Anupam Seth
>Priority: Critical
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HADOOP-8172-branch-2.patch, HADOOP-8172-branch-2.patch
>
>
> I did not look at the patch for HADOOP-8167 previously, but I did in response 
> to a recent test failure. The patch appears to have changed the following 
> code (I am just paraphrasing the code)
> {code}
> if (!deprecated(key)) {
>   set(key, value);
> } else {
>   for (String newKey : deprecatedKeyMap.get(key)) {
>     set(newKey, value);
>   }
> }
> {code}
> to be 
> {code}
> set(key, value);
> if (deprecatedKeyMap.contains(key)) {
>   set(deprecatedKeyMap.get(key)[0], value);
> } else if (reverseKeyMap.contains(key)) {
>   set(reverseKeyMap.get(key), value);
> }
> {code}
> If a key is deprecated and is mapped to more than one new key, only the 
> first one in the list will be set, whereas previously all of them would be 
> set.
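The intended semantics can be sketched outside of Hadoop itself. The map contents and key names below are hypothetical and the real Configuration class is far more involved; the essential part is the loop, which sets every replacement key mapped to a deprecated key rather than just the first.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DeprecationDemo {
    // Hypothetical stand-in for the deprecation table: one deprecated
    // key may map to SEVERAL replacement keys.
    static final Map<String, List<String>> DEPRECATED =
        Map.of("old.key", List.of("new.key.a", "new.key.b"));

    final Map<String, String> props = new HashMap<>();

    // Pre-HADOOP-8167 semantics this patch restores: the value is
    // written under the original key AND every mapped replacement.
    void set(String key, String value) {
        props.put(key, value);
        for (String newKey : DEPRECATED.getOrDefault(key, List.of())) {
            props.put(newKey, value);
        }
    }

    public static void main(String[] args) {
        DeprecationDemo conf = new DeprecationDemo();
        conf.set("old.key", "42");
        System.out.println(conf.props.get("new.key.a")); // prints 42
        System.out.println(conf.props.get("new.key.b")); // prints 42
    }
}
```

With the buggy `get(key)[0]` form, only `new.key.a` would receive the value and readers of `new.key.b` would silently see the default.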

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266549#comment-13266549
 ] 

Hudson commented on HADOOP-8339:


Integrated in Hadoop-Mapreduce-trunk #1067 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1067/])
HADOOP-8339. jenkins complaining about 16 javadoc warnings (Tom White and 
Robert Evans via tgraves) (Revision 1332853)

 Result = FAILURE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332853
Files : 
* /hadoop/common/trunk/dev-support/test-patch.properties
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/dev-support/test-patch.properties
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/FenceMethod.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/test-patch.properties
* /hadoop/common/trunk/hadoop-hdfs-project/dev-support/test-patch.properties
* 
/hadoop/common/trunk/hadoop-mapreduce-project/dev-support/test-patch.properties
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/mapred/tools/package-info.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/CurrentJHParser.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedTaskAttempt.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/datatypes/util/MapReduceJobPropertiesParser.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/package-info.java


> jenkins complaining about 16 javadoc warnings 
> --
>
> Key: HADOOP-8339
> URL: https://issues.apache.org/jira/browse/HADOOP-8339
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Thomas Graves
>Assignee: Robert Joseph Evans
> Fix For: 3.0.0
>
> Attachments: HADOOP-8339.patch, HADOOP-8339.txt, HADOOP-8339.txt, 
> HADOOP-8339.txt
>
>
> See any of the mapreduce/hadoop jenkins reports recently and they all 
> complain about 16 javadoc warnings.
> -1 javadoc.  The javadoc tool appears to have generated 16 warning 
> messages.
> Which really means there are 24 since there are 8 that are supposed to be OK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8317) Update maven-assembly-plugin to 2.3 - fix build on FreeBSD

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266551#comment-13266551
 ] 

Hudson commented on HADOOP-8317:


Integrated in Hadoop-Mapreduce-trunk #1067 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1067/])
HADOOP-8317. Update maven-assembly-plugin to 2.3 - fix build on FreeBSD 
(Radim Kolar via bobby) (Revision 1332775)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332775
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/pom.xml


> Update maven-assembly-plugin to 2.3 - fix build on FreeBSD
> --
>
> Key: HADOOP-8317
> URL: https://issues.apache.org/jira/browse/HADOOP-8317
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.3, 2.0.0
> Environment: FreeBSD 8.2, AMD 64, OPENJDK 6, ZFS
>Reporter: Radim Kolar
> Fix For: 2.0.0, 3.0.0
>
> Attachments: assembly-plugin-update.txt
>
>
> There is a bug in the maven-assembly plugin which makes builds fail on FreeBSD 
> because its chmod does not understand nonstandard Linux parameters. Unless you 
> run mvn clean before every build, it fails with:
> [INFO] --- maven-assembly-plugin:2.2.1:single (dist) @ hadoop-common ---
> [WARNING] The following patterns were never triggered in this artifact 
> exclusion filter:
> o  'org.apache.ant:*:jar'
> o  'jdiff:jdiff:jar'
> [INFO] Copying files to 
> /usr/local/jboss/.jenkins/jobs/Hadoop-0.23/workspace/hadoop-common-project/hadoop-common/target/hadoop-common-0.23.3-SNAPSHOT
> [WARNING] ---
> [WARNING] Standard error:
> [WARNING] ---
> [WARNING] 
> [WARNING] ---
> [WARNING] Standard output:
> [WARNING] ---
> [WARNING] chmod: 
> /usr/local/jboss/.jenkins/jobs/Hadoop-0.23/workspace/hadoop-common-project/hadoop-common/target/hadoop-common-0.23.3-SNAPSHOT/share/hadoop/common/lib/hadoop-auth-0.23.3-SNAPSHOT.jar:
>  Inappropriate file type or format
> [WARNING] ---
> mojoFailed org.apache.maven.plugins:maven-assembly-plugin:2.2.1(dist)
> projectFailed org.apache.hadoop:hadoop-common:0.23.3-SNAPSHOT
> sessionEnded

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8172) Configuration no longer sets all keys in a deprecated key list.

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266535#comment-13266535
 ] 

Hudson commented on HADOOP-8172:


Integrated in Hadoop-Hdfs-trunk #1032 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1032/])
HADOOP-8172. Configuration no longer sets all keys in a deprecated key 
list. (Anupam Seth via bobby) (Revision 1332821)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332821
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationDeprecation.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestDeprecatedKeys.java


> Configuration no longer sets all keys in a deprecated key list.
> ---
>
> Key: HADOOP-8172
> URL: https://issues.apache.org/jira/browse/HADOOP-8172
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 0.23.3, 0.24.0
>Reporter: Robert Joseph Evans
>Assignee: Anupam Seth
>Priority: Critical
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HADOOP-8172-branch-2.patch, HADOOP-8172-branch-2.patch
>
>
> I did not look at the patch for HADOOP-8167 previously, but I did in response 
> to a recent test failure. The patch appears to have changed the following 
> code (I am just paraphrasing the code)
> {code}
> if(!deprecated(key)) {
>   set(key, value);
> } else {
>   for(String newKey: deprecatedKeyMap.get(key)) {
>     set(newKey, value);
>   }
> }
> {code}
> to be 
> {code}
> set(key, value);
> if(deprecatedKeyMap.contains(key)) {
>    set(deprecatedKeyMap.get(key)[0], value);
> } else if(reverseKeyMap.contains(key)) {
>    set(reverseKeyMap.get(key), value);
> }
> {code}
> If a key is deprecated and is mapped to more than one new key, only the 
> first one in the list will be set, whereas previously all of them would be 
> set.

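For illustration, a minimal self-contained sketch of the fan-out behavior the reporter describes (class and field names here are hypothetical stand-ins, not the actual Configuration internals): a deprecated key mapped to several new keys should set all of them, not just the first.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for Configuration's deprecated-key handling.
public class DeprecationFanOut {
    // Maps a deprecated key to ALL of its replacement keys.
    static final Map<String, List<String>> DEPRECATED_KEY_MAP =
        Map.of("old.key", List.of("new.key.a", "new.key.b"));

    final Map<String, String> props = new HashMap<>();

    public void set(String key, String value) {
        List<String> newKeys = DEPRECATED_KEY_MAP.get(key);
        if (newKeys == null) {
            props.put(key, value);
        } else {
            // Fan out to every replacement key, not just the first one.
            for (String newKey : newKeys) {
                props.put(newKey, value);
            }
        }
    }

    public String get(String key) {
        return props.get(key);
    }
}
```

Under this sketch, setting `old.key` makes the value visible under both `new.key.a` and `new.key.b`, which is the behavior the bug restored.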




[jira] [Commented] (HADOOP-8275) Range check DelegationKey length

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266534#comment-13266534
 ] 

Hudson commented on HADOOP-8275:


Integrated in Hadoop-Hdfs-trunk #1032 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1032/])
HADOOP-8275. Range check DelegationKey length. Contributed by Colin Patrick 
McCabe (Revision 1332839)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332839
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/WritableUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/DelegationKey.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestWritableUtils.java


> Range check DelegationKey length 
> -
>
> Key: HADOOP-8275
> URL: https://issues.apache.org/jira/browse/HADOOP-8275
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HADOOP-8275.001.patch, HADOOP-8275.002.patch, 
> HADOOP-8275.003.patch
>
>
> Harden serialization logic against malformed or malicious input.
> Add range checking to readVInt, to detect overflows, underflows, and 
> larger-than-expected values.

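As a hedged sketch of the hardening idea (the method and class names below are hypothetical, not the actual WritableUtils API): after decoding a variable-length integer, validate it against the caller's expected bounds before narrowing to int, so overflows, underflows, and larger-than-expected values fail fast instead of being silently truncated.

```java
import java.io.IOException;

// Hypothetical range-check helper in the spirit of the fix.
public class VIntRangeCheck {
    public static int checkedCast(long decoded, int lower, int upper)
            throws IOException {
        if (decoded < lower || decoded > upper) {
            // Reject out-of-range values instead of silently truncating.
            throw new IOException("decoded value " + decoded
                + " outside expected range [" + lower + ", " + upper + "]");
        }
        return (int) decoded;
    }
}
```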




[jira] [Commented] (HADOOP-8317) Update maven-assembly-plugin to 2.3 - fix build on FreeBSD

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266533#comment-13266533
 ] 

Hudson commented on HADOOP-8317:


Integrated in Hadoop-Hdfs-trunk #1032 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1032/])
HADOOP-8317. Update maven-assembly-plugin to 2.3 - fix build on FreeBSD 
(Radim Kolar via bobby) (Revision 1332775)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332775
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/pom.xml


> Update maven-assembly-plugin to 2.3 - fix build on FreeBSD
> --
>
> Key: HADOOP-8317
> URL: https://issues.apache.org/jira/browse/HADOOP-8317
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.3, 2.0.0
> Environment: FreeBSD 8.2, AMD 64, OPENJDK 6, ZFS
>Reporter: Radim Kolar
> Fix For: 2.0.0, 3.0.0
>
> Attachments: assembly-plugin-update.txt
>
>
> There is a bug in the maven-assembly plugin which makes builds fail on 
> FreeBSD because its chmod does not understand non-standard Linux parameters. 
> Unless you run mvn clean before every build, it fails with:
> [INFO] --- maven-assembly-plugin:2.2.1:single (dist) @ hadoop-common ---
> [WARNING] The following patterns were never triggered in this artifact 
> exclusion filter:
> o  'org.apache.ant:*:jar'
> o  'jdiff:jdiff:jar'
> [INFO] Copying files to 
> /usr/local/jboss/.jenkins/jobs/Hadoop-0.23/workspace/hadoop-common-project/hadoop-common/target/hadoop-common-0.23.3-SNAPSHOT
> [WARNING] ---
> [WARNING] Standard error:
> [WARNING] ---
> [WARNING] 
> [WARNING] ---
> [WARNING] Standard output:
> [WARNING] ---
> [WARNING] chmod: 
> /usr/local/jboss/.jenkins/jobs/Hadoop-0.23/workspace/hadoop-common-project/hadoop-common/target/hadoop-common-0.23.3-SNAPSHOT/share/hadoop/common/lib/hadoop-auth-0.23.3-SNAPSHOT.jar:
>  Inappropriate file type or format
> [WARNING] ---
> mojoFailed org.apache.maven.plugins:maven-assembly-plugin:2.2.1(dist)
> projectFailed org.apache.hadoop:hadoop-common:0.23.3-SNAPSHOT
> sessionEnded





[jira] [Commented] (HADOOP-8339) jenkins complaining about 16 javadoc warnings

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266531#comment-13266531
 ] 

Hudson commented on HADOOP-8339:


Integrated in Hadoop-Hdfs-trunk #1032 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1032/])
HADOOP-8339. jenkins complaining about 16 javadoc warnings (Tom White and 
Robert Evans via tgraves) (Revision 1332853)

 Result = FAILURE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332853
Files : 
* /hadoop/common/trunk/dev-support/test-patch.properties
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/dev-support/test-patch.properties
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/FenceMethod.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HAServiceProtocol.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/test-patch.properties
* /hadoop/common/trunk/hadoop-hdfs-project/dev-support/test-patch.properties
* 
/hadoop/common/trunk/hadoop-mapreduce-project/dev-support/test-patch.properties
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/mapred/tools/package-info.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/CurrentJHParser.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedTaskAttempt.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/datatypes/util/MapReduceJobPropertiesParser.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/package-info.java


> jenkins complaining about 16 javadoc warnings 
> --
>
> Key: HADOOP-8339
> URL: https://issues.apache.org/jira/browse/HADOOP-8339
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Thomas Graves
>Assignee: Robert Joseph Evans
> Fix For: 3.0.0
>
> Attachments: HADOOP-8339.patch, HADOOP-8339.txt, HADOOP-8339.txt, 
> HADOOP-8339.txt
>
>
> See any of the mapreduce/hadoop jenkins reports recently and they all 
> complain about 16 javadoc warnings.
> -1 javadoc.  The javadoc tool appears to have generated 16 warning 
> messages.
> Which really means there are 24 since there are 8 that are supposed to be OK.





[jira] [Commented] (HADOOP-8104) Inconsistent Jackson versions

2012-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266519#comment-13266519
 ] 

Hudson commented on HADOOP-8104:


Integrated in Hadoop-Hdfs-0.23-Build #245 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/245/])
svn merge -c 1294784 FIXES: HADOOP-8104. Inconsistent Jackson versions 
(tucu) (Revision 1332802)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1332802
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-project/pom.xml


> Inconsistent Jackson versions
> -
>
> Key: HADOOP-8104
> URL: https://issues.apache.org/jira/browse/HADOOP-8104
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.23.3
>Reporter: Colin Patrick McCabe
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.3, 2.0.0
>
> Attachments: HADOOP-7470.patch, HADOOP-8104.patch, HADOOP-8104.patch, 
> dependency-tree-old.txt
>
>
> This is a maven build issue.
> Jersey 1.8 is pulling in version 1.7.1 of Jackson.  Meanwhile, we are 
> manually specifying that we want version 1.8 of Jackson in the POM files.  
> This causes a conflict where Jackson produces unexpected results when 
> serializing Map objects.
> How to reproduce: try this code:
> {code}
> ObjectMapper mapper = new ObjectMapper();
> Map m = new HashMap();
> mapper.writeValue(new File("foo"), m);
> {code}
> You will get an exception:
> {code}
> Exception in thread "main" java.lang.NoSuchMethodError: 
> org.codehaus.jackson.type.JavaType.isMapLikeType()Z
> at 
> org.codehaus.jackson.map.ser.BasicSerializerFactory.buildContainerSerializer(BasicSerializerFactory.java:396)
> at 
> org.codehaus.jackson.map.ser.BeanSerializerFactory.createSerializer(BeanSerializerFactory.java:267)
> {code}
> Basically the inconsistent versions of various Jackson components are causing 
> this NoSuchMethod error.
> As far as I know, this only occurs when serializing maps-- that's why it 
> hasn't been found and fixed yet.





[jira] [Commented] (HADOOP-8342) HDFS command fails with exception following merge of HADOOP-8325

2012-05-02 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266457#comment-13266457
 ] 

Aaron T. Myers commented on HADOOP-8342:


+1, I tried the patch and observed that it fixes the issue.

> HDFS command fails with exception following merge of HADOOP-8325
> 
>
> Key: HADOOP-8342
> URL: https://issues.apache.org/jira/browse/HADOOP-8342
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.0
> Environment: QE tests on version 2.0.1205010603
>Reporter: Randy Clayton
>Assignee: Alejandro Abdelnur
> Attachments: HDFS-3337.patch
>
>
> We are seeing most hdfs commands in our nightly acceptance tests fail with an 
> exception as shown below. This started within a few hours of the merge of 
> HADOOP-8325 on 4/30/2012.
> hdfs --config conf/hadoop/ dfs -ls dirname
> ls: `dirname': No such file or directory
> 12/05/01 16:57:52 WARN util.ShutdownHookManager: ShutdownHook 
> 'ClientFinalizer' failed, java.lang.IllegalStateException: Shutdown in 
> progress, cannot remove a shutdownHook
> java.lang.IllegalStateException: Shutdown in progress, cannot remove a 
> shutdownHook
>   at 
> org.apache.hadoop.util.ShutdownHookManager.removeShutdownHook(ShutdownHookManager.java:166)
>   at org.apache.hadoop.fs.FileSystem$Cache.remove(FileSystem.java:2202)
>   at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2231)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2251)
>   at 
> org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)





[jira] [Commented] (HADOOP-8343) Allow configuration of authorization for JmxJsonServlet and MetricsServlet

2012-05-02 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266425#comment-13266425
 ] 

Aaron T. Myers commented on HADOOP-8343:


+1, the patch looks good to me.

> Allow configuration of authorization for JmxJsonServlet and MetricsServlet
> --
>
> Key: HADOOP-8343
> URL: https://issues.apache.org/jira/browse/HADOOP-8343
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 2.0.0
>Reporter: Philip Zeyliger
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-8343.patch
>
>
> When using authorization for the daemons' web server, it would be useful to 
> specifically control the authorization requirements for accessing /jmx and 
> /metrics.  Currently, they require administrative access.  This JIRA proposes 
> that whether they are available to administrators only or to all users be 
> controlled by "hadoop.instrumentation.requires.administrator" (or similar). 
> The default would be that administrator access is required.




