[jira] [Commented] (HADOOP-9446) Support Kerberos HTTP SPNEGO authentication for non-SUN JDK

2013-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13737890#comment-13737890
 ] 

Hadoop QA commented on HADOOP-9446:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12597665/HADOOP-9446-v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2973//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2973//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2973//console

This message is automatically generated.

 Support Kerberos HTTP SPNEGO authentication for non-SUN JDK
 ---

 Key: HADOOP-9446
 URL: https://issues.apache.org/jira/browse/HADOOP-9446
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Yu Gao
Assignee: Yu Gao
 Attachments: HADOOP-9446-branch-2.patch, 
 HADOOP-9446-branch-2-v2.patch, HADOOP-9446.patch, HADOOP-9446-v2.patch, 
 TestKerberosHttpSPNEGO.java, 
 TEST-org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator.xml,
  
 TEST-org.apache.hadoop.security.authentication.server.TestKerberosAuthenticationHandler.xml


 Classes KerberosAuthenticator and KerberosAuthenticationHandler currently 
 support running only with the SUN JDK when Kerberos is enabled. To support 
 alternative JDKs such as the IBM JDK, which supports different 
 Krb5LoginModule options and uses different login module classes, the HTTP 
 Kerberos authentication classes need to be changed.
 In addition, NT_GSS_KRB5_PRINCIPAL, which KerberosAuthenticator uses to get 
 the corresponding oid instance, is a field defined in the SUN JDK but not 
 in the IBM JDK.
 This JIRA is to fix the existing problems and add support for Kerberos HTTP 
 SPNEGO authentication with a non-SUN JDK.
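 As a rough illustration (an assumption, not the attached patch), the 
 vendor-specific pieces can be hidden behind a small helper that picks the 
 Krb5LoginModule class by JDK vendor:
 {code}
 // Hedged sketch: the two class names are the known SUN and IBM login
 // modules; the helper name and placement are illustrative only.
 public final class Krb5LoginModuleSelector {
   public static String getKrb5LoginModuleName() {
     boolean ibmJdk = System.getProperty("java.vendor").contains("IBM");
     return ibmJdk
         ? "com.ibm.security.auth.module.Krb5LoginModule"
         : "com.sun.security.auth.module.Krb5LoginModule";
   }
 }
 {code}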

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9863) Snappy compression in branch1 cannot work on Big-Endian 64 bit platform w/o backporting HADOOP-8686

2013-08-13 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13737912#comment-13737912
 ] 

Yu Li commented on HADOOP-9863:
---

Since this is a branch-1 patch, it seems Hadoop QA has some problem testing it.

Here is the result of test-patch on my local env:

  {color:red}-1 overall.{color}

  {color:green}+1 @author.{color}  The patch does not contain any @author 
tags.

  {color:red}-1 tests included.{color}  The patch doesn't appear to include 
any new or modified tests.
  Please justify why no tests are needed for this patch.

  {color:green}+1 javadoc.{color}  The javadoc tool did not generate any 
warning messages.

  {color:green}+1 javac.{color}  The applied patch does not increase the 
total number of javac compiler warnings.

  {color:red}-1 findbugs.{color}  The patch appears to introduce 239 new 
Findbugs (version 2.0.1) warnings.


 Snappy compression in branch1 cannot work on Big-Endian 64 bit platform w/o 
 backporting HADOOP-8686
 ---

 Key: HADOOP-9863
 URL: https://issues.apache.org/jira/browse/HADOOP-9863
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 1.1.1, 1.1.2, 1.2.1
Reporter: Yu Li
Assignee: Yu Li
  Labels: native, ppc64, snappy
 Attachments: HADOOP-9863.patch


 Without the changes made in HADOOP-8686 to SnappyCompressor.c, snappy 
 compression in branch-1 Hadoop on a big-endian 64-bit platform (ppc64, for 
 example) will generate an incorrect, almost-empty .snappy file because of 
 the type cast from size_t to jint. A more detailed analysis will follow in 
 the comments.
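 To see why the cast corrupts the length on a big-endian platform, here is a 
 small sketch (a byte-order illustration in Java, not the JNI code itself): 
 reading only the first four bytes of an eight-byte length yields the high 
 word, which is zero for any small value.
 {code}
 import java.nio.ByteBuffer;
 import java.nio.ByteOrder;

 public class EndianCastDemo {
   public static void main(String[] args) {
     long compressedLen = 42L;  // the 64-bit size_t value snappy writes
     ByteBuffer buf = ByteBuffer.allocate(8);

     // Big-endian (e.g. ppc64): the first 4 bytes hold the HIGH word.
     buf.order(ByteOrder.BIG_ENDIAN).putLong(0, compressedLen);
     System.out.println("big-endian jint sees: " + buf.getInt(0));     // 0

     // Little-endian (e.g. x86_64): the first 4 bytes hold the LOW word.
     buf.order(ByteOrder.LITTLE_ENDIAN).putLong(0, compressedLen);
     System.out.println("little-endian jint sees: " + buf.getInt(0));  // 42
   }
 }
 {code}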

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9306) Refactor UserGroupInformation to reduce branching for multi-platform support

2013-08-13 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated HADOOP-9306:
---

Assignee: (was: Sandy Ryza)

 Refactor UserGroupInformation to reduce branching for multi-platform support
 

 Key: HADOOP-9306
 URL: https://issues.apache.org/jira/browse/HADOOP-9306
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Aaron T. Myers

 Per Tucu's comment on HADOOP-9305, we can refactor the code for conditionally 
 loading classes based on the OS version, JRE version, and bitmode to use a 
 map and struct. Seems like good cleanup.
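 A minimal sketch of that shape (the key layout and entries are assumptions, 
 not the actual UserGroupInformation tables):
 {code}
 import java.util.HashMap;
 import java.util.Map;

 public class LoginModuleTable {
   // Platform-specific login module classes keyed by "os/vendor/bitmode",
   // replacing nested if/else branches. Entries are illustrative.
   private static final Map<String, String> MODULES =
       new HashMap<String, String>();
   static {
     MODULES.put("windows/sun/64", "com.sun.security.auth.module.NTLoginModule");
     MODULES.put("unix/sun/64",    "com.sun.security.auth.module.UnixLoginModule");
     MODULES.put("unix/ibm/64",    "com.ibm.security.auth.module.LinuxLoginModule");
   }

   public static String lookup(String os, String vendor, int bits) {
     return MODULES.get(os + "/" + vendor + "/" + bits);
   }
 }
 {code}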

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9306) Refactor UserGroupInformation to reduce branching for multi-platform support

2013-08-13 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated HADOOP-9306:
---

Labels: newbie  (was: )

 Refactor UserGroupInformation to reduce branching for multi-platform support
 

 Key: HADOOP-9306
 URL: https://issues.apache.org/jira/browse/HADOOP-9306
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Aaron T. Myers
  Labels: newbie

 Per Tucu's comment on HADOOP-9305, we can refactor the code for conditionally 
 loading classes based on the OS version, JRE version, and bitmode to use a 
 map and struct. Seems like good cleanup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9863) Snappy compression in branch1 cannot work on Big-Endian 64 bit platform w/o backporting HADOOP-8686

2013-08-13 Thread Tian Hong Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13737963#comment-13737963
 ] 

Tian Hong Wang commented on HADOOP-9863:


Yu, can you port this patch to trunk or branch-2?

 Snappy compression in branch1 cannot work on Big-Endian 64 bit platform w/o 
 backporting HADOOP-8686
 ---

 Key: HADOOP-9863
 URL: https://issues.apache.org/jira/browse/HADOOP-9863
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 1.1.1, 1.1.2, 1.2.1
Reporter: Yu Li
Assignee: Yu Li
  Labels: native, ppc64, snappy
 Attachments: HADOOP-9863.patch


 Without the changes made in HADOOP-8686 to SnappyCompressor.c, snappy 
 compression in branch-1 Hadoop on a big-endian 64-bit platform (ppc64, for 
 example) will generate an incorrect, almost-empty .snappy file because of 
 the type cast from size_t to jint. A more detailed analysis will follow in 
 the comments.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9848) Create a MiniKDC for use with security testing

2013-08-13 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13737997#comment-13737997
 ] 

Wei Yan commented on HADOOP-9848:
-

[~szetszwo]
The two javadoc warnings are not related to this patch. 
{code}
[WARNING] 
org/apache/directory/api/ldap/model/name/Dn.class(org/apache/directory/api/ldap/model/name:Dn.class):
 warning: Cannot find annotation method
 'value()' in type 'edu.umd.cs.findbugs.annotations.SuppressWarnings': class 
file for edu.umd.cs.findbugs.annotations.SuppressWarnings not found
[WARNING] 
org/apache/directory/api/ldap/model/name/Dn.class(org/apache/directory/api/ldap/model/name:Dn.class):
 warning: Cannot find annotation method 
'justification()' in type 'edu.umd.cs.findbugs.annotations.SuppressWarnings'
{code}

This patch uses the class Dn from ApacheDS, but it doesn't use 
findbugs-annotations, so I don't include findbugs-annotations in this patch. 
Actually, if we imported findbugs-annotations here, it would introduce javac 
warnings 
(https://issues.apache.org/jira/browse/HADOOP-9848?focusedCommentId=13735059&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13735059)
 because the jar package checksum is unavailable.

These javadoc warnings were also discussed in YARN-107 and YARN-643.

 Create a MiniKDC for use with security testing
 --

 Key: HADOOP-9848
 URL: https://issues.apache.org/jira/browse/HADOOP-9848
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security, test
Reporter: Wei Yan
Assignee: Wei Yan
 Fix For: 2.3.0

 Attachments: HADOOP-9848.patch, HADOOP-9848.patch, HADOOP-9848.patch, 
 HADOOP-9848.patch, HADOOP-9848.patch


 Create a MiniKDC using Apache Directory Server. MiniKDC builds an embedded 
 KDC (key distribution center) and allows creating principals and keytabs on 
 the fly. MiniKDC can be integrated into Hadoop security unit testing.
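 A short usage sketch (the method names follow the committed patch, but 
 treat the details as assumptions when adapting):
 {code}
 import java.io.File;
 import java.util.Properties;
 import org.apache.hadoop.minikdc.MiniKdc;

 public class MiniKdcExample {
   public static void main(String[] args) throws Exception {
     Properties conf = MiniKdc.createConf();
     File workDir = new File("target/minikdc");  // hypothetical scratch dir
     MiniKdc kdc = new MiniKdc(conf, workDir);
     kdc.start();
     try {
       // Create a principal and its keytab on the fly for a test.
       File keytab = new File(workDir, "test.keytab");
       kdc.createPrincipal(keytab, "client/localhost");
       System.out.println("KDC realm: " + kdc.getRealm());
     } finally {
       kdc.stop();
     }
   }
 }
 {code}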

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9845) Update protobuf to 2.5 from 2.4.x

2013-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738085#comment-13738085
 ] 

Hudson commented on HADOOP-9845:


SUCCESS: Integrated in Hadoop-Yarn-trunk #300 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/300/])
HADOOP-9845. Update protobuf to 2.5 from 2.4.x. (tucu) (tucu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1513281)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml


 Update protobuf to 2.5 from 2.4.x
 -

 Key: HADOOP-9845
 URL: https://issues.apache.org/jira/browse/HADOOP-9845
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance
Affects Versions: 2.0.5-alpha
Reporter: stack
Assignee: Alejandro Abdelnur
Priority: Blocker
 Fix For: 2.1.0-beta

 Attachments: HADOOP-9845.patch, HADOOP-9845.patch


 protobuf 2.5 is a bit faster, with a new Parser that avoids a builder step 
 and a few other goodies that we'd like to take advantage of over in HBase, 
 especially now that we are all pb all the time.  Unfortunately, the 
 protoc-generated files are no longer compatible with files generated by 
 2.4.1, and Hadoop uses 2.4.1 pb; this means we cannot upgrade until Hadoop 
 does.
 This issue suggests hadoop2 move to protobuf 2.5.
 I can do the patch, no problem, if there is interest.
 (When we upgraded, our build broke with complaints like the below:
 {code}
 java.lang.UnsupportedOperationException: This is supposed to be overridden by 
 subclasses.
   at 
 com.google.protobuf.GeneratedMessage.getUnknownFields(GeneratedMessage.java:180)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetDatanodeReportRequestProto.getSerializedSize(ClientNamenodeProtocolProtos.java:21566)
   at 
 com.google.protobuf.AbstractMessageLite.toByteString(AbstractMessageLite.java:49)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.constructRpcRequest(ProtobufRpcEngine.java:149)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:193)
   at com.sun.proxy.$Proxy14.getDatanodeReport(Unknown Source)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
   at com.sun.proxy.$Proxy14.getDatanodeReport(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDatanodeReport(ClientNamenodeProtocolTranslatorPB.java:488)
   at org.apache.hadoop.hdfs.DFSClient.datanodeReport(DFSClient.java:1887)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1798
 ...
 {code}
 More over in HBASE-8165 if interested.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9848) Create a MiniKDC for use with security testing

2013-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738084#comment-13738084
 ] 

Hudson commented on HADOOP-9848:


SUCCESS: Integrated in Hadoop-Yarn-trunk #300 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/300/])
HADOOP-9848. Create a MiniKDC for use with security testing. (ywskycn via tucu) 
(tucu: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1513308)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory/server
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory/server/kerberos
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory/server/kerberos/shared
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory/server/kerberos/shared/keytab
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory/server/kerberos/shared/keytab/HackedKeytab.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory/server/kerberos/shared/keytab/HackedKeytabEncoder.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/KerberosSecurityTestcase.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/MiniKdc.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/resources
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/resources/log4j.properties
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/resources/minikdc-krb5.conf
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/resources/minikdc.ldiff
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test/java
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test/java/org
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test/java/org/apache
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test/java/org/apache/hadoop
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test/java/org/apache/hadoop/minikdc
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test/java/org/apache/hadoop/minikdc/TestMiniKdc.java
* /hadoop/common/trunk/hadoop-common-project/pom.xml
* /hadoop/common/trunk/hadoop-project/pom.xml


 Create a MiniKDC for use with security testing
 --

 Key: HADOOP-9848
 URL: https://issues.apache.org/jira/browse/HADOOP-9848
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security, test
Reporter: Wei Yan
Assignee: Wei Yan
 Fix For: 2.3.0

 Attachments: HADOOP-9848.patch, HADOOP-9848.patch, HADOOP-9848.patch, 
 HADOOP-9848.patch, HADOOP-9848.patch


 Create a MiniKDC using Apache Directory Server. MiniKDC builds an embedded 
 KDC (key distribution center) and allows creating principals and keytabs on 
 the fly. MiniKDC can be integrated into Hadoop security unit testing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9583) test-patch gives +1 despite build failure when running tests

2013-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738086#comment-13738086
 ] 

Hudson commented on HADOOP-9583:


SUCCESS: Integrated in Hadoop-Yarn-trunk #300 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/300/])
HADOOP-9583. test-patch gives +1 despite build failure when running tests. 
Contributed by Jason Lowe. (kihwal: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1513200)
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 test-patch gives +1 despite build failure when running tests
 

 Key: HADOOP-9583
 URL: https://issues.apache.org/jira/browse/HADOOP-9583
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Fix For: 3.0.0

 Attachments: HADOOP-9583.another_dummy.patch, 
 HADOOP-9583-dummy.patch, HADOOP-9583-dummy.patch, 
 HADOOP-9583-dummy-without-changes.patch, HADOOP-9583.patch


 I've seen a couple of checkins recently where tests have timed out resulting 
 in a Maven build failure yet test-patch reports an overall +1 on the patch.  
 This is encouraging commits of patches that subsequently break builds.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9847) TestGlobPath symlink tests fail to cleanup properly

2013-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738083#comment-13738083
 ] 

Hudson commented on HADOOP-9847:


SUCCESS: Integrated in Hadoop-Yarn-trunk #300 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/300/])
Fix CHANGES.txt for HADOOP-9847 (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1513252)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 TestGlobPath symlink tests fail to cleanup properly
 ---

 Key: HADOOP-9847
 URL: https://issues.apache.org/jira/browse/HADOOP-9847
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Andrew Wang
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 3.0.0, 2.3.0

 Attachments: HADOOP-9847.001.patch


 On our internal trunk Jenkins runs, I've seen failures like the following:
 {noformat}
 Error Message:
 Cannot delete /user/jenkins. Name node is in safe mode. Resources are low on 
 NN. Please add or free up more resources then turn off safe mode manually. 
 NOTE:  If you turn off safe mode before adding resources, the NN will 
 immediately return to safe mode. Use hdfs dfsadmin -safemode leave to turn 
 safe mode off.  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:3138)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:3097)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3081)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:671)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:491)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48087)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932)  at 
 org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2031)  at 
 org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2027)  at 
 java.security.AccessController.doPrivileged(Native Method)  at 
 javax.security.auth.Subject.doAs(Subject.java:396)  at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1493)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2025)
 Stack Trace:
 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
  Cannot delete /user/jenkins. Name node is in safe mode.
 Resources are low on NN. Please add or free up more resources then turn off 
 safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
 the NN will immediately return to safe mode. Use hdfs dfsadmin -safemode 
 leave to turn safe mode off.
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:3138)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:3097)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3081)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:671)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:491)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48087)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2031)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2027)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1493)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2025)
 at org.apache.hadoop.ipc.Client.call(Client.java:1399)
 at org.apache.hadoop.ipc.Client.call(Client.java:1352)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
 at $Proxy15.delete(Unknown Source)
 at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
 at 
 

[jira] [Created] (HADOOP-9867) org.apache.hadoop.mapred.LineRecordReader does not handle multibyte record delimiters well

2013-08-13 Thread Kris Geusebroek (JIRA)
Kris Geusebroek created HADOOP-9867:
---

 Summary: org.apache.hadoop.mapred.LineRecordReader does not handle 
multibyte record delimiters well
 Key: HADOOP-9867
 URL: https://issues.apache.org/jira/browse/HADOOP-9867
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 0.20.2
 Environment: CDH3U2 Redhat linux 5.7
Reporter: Kris Geusebroek


Having defined a record delimiter of multiple bytes in a new InputFileFormat 
sometimes has the effect of skipping records from the input.

This happens when the input splits are split off just after a record 
separator. The starting point for the next split would be non-zero and 
skipFirstLine would be true. A seek into the file is done to start - 1 and 
the text until the first record delimiter is ignored (due to the presumption 
that this record was already handled by the previous map task). Since the 
record delimiter is multibyte, the seek only got the last byte of the 
delimiter into scope, and it is not recognized as a full delimiter. So the 
text is skipped until the next delimiter (ignoring a full record!).
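A small worked example of the failure mode (the data and offsets are 
illustrative): with the two-byte delimiter "\r\n" and a split boundary that 
lands right after it, seeking to start - 1 exposes only the trailing '\n', 
which is never recognized as a delimiter on its own.

{code}
public class SplitBoundaryDemo {
  public static void main(String[] args) {
    byte[] data = "record1\r\nrecord2\r\nrecord3".getBytes();
    int start = 9;            // split boundary: right after the first "\r\n"
    int seekPos = start - 1;  // = 8, the '\n' byte -- only half the delimiter
    // The reader scans from seekPos for the next complete "\r\n". The lone
    // '\n' at position 8 is not a full match, so the scan runs through
    // "record2" to the delimiter at positions 16-17, and record2 is dropped.
    System.out.println("byte at seekPos: " + (char) data[seekPos]);
  }
}
{code}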


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9867) org.apache.hadoop.mapred.LineRecordReader does not handle multibyte record delimiters well

2013-08-13 Thread Kris Geusebroek (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738145#comment-13738145
 ] 

Kris Geusebroek commented on HADOOP-9867:
-

I created a fix by adding the following code (the lines marked with + are 
the additions; the loop backs start up by the full delimiter length):

} else {
  if (start != 0) {
    skipFirstLine = true;
+   for (int i = 0; i < recordDelimiter.length; i++) {
      --start;
+   }
    fileIn.seek(start);
  }

Currently I'm testing this with a custom subclass of LineRecordReader. If 
testing is OK, I'm willing to create a patch file.

 org.apache.hadoop.mapred.LineRecordReader does not handle multibyte record 
 delimiters well
 --

 Key: HADOOP-9867
 URL: https://issues.apache.org/jira/browse/HADOOP-9867
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 0.20.2
 Environment: CDH3U2 Redhat linux 5.7
Reporter: Kris Geusebroek

 Having defined a record delimiter of multiple bytes in a new InputFileFormat 
 sometimes has the effect of skipping records from the input.
 This happens when the input splits are split off just after a record 
 separator. The starting point for the next split would be non-zero and 
 skipFirstLine would be true. A seek into the file is done to start - 1 and 
 the text until the first record delimiter is ignored (due to the presumption 
 that this record was already handled by the previous map task). Since the 
 record delimiter is multibyte, the seek only got the last byte of the 
 delimiter into scope, and it is not recognized as a full delimiter. So the 
 text is skipped until the next delimiter (ignoring a full record!).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9583) test-patch gives +1 despite build failure when running tests

2013-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738183#comment-13738183
 ] 

Hudson commented on HADOOP-9583:


SUCCESS: Integrated in Hadoop-Hdfs-trunk #1490 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1490/])
HADOOP-9583. test-patch gives +1 despite build failure when running tests. 
Contributed by Jason Lowe. (kihwal: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1513200)
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 test-patch gives +1 despite build failure when running tests
 

 Key: HADOOP-9583
 URL: https://issues.apache.org/jira/browse/HADOOP-9583
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Fix For: 3.0.0

 Attachments: HADOOP-9583.another_dummy.patch, 
 HADOOP-9583-dummy.patch, HADOOP-9583-dummy.patch, 
 HADOOP-9583-dummy-without-changes.patch, HADOOP-9583.patch


 I've seen a couple of checkins recently where tests have timed out resulting 
 in a Maven build failure yet test-patch reports an overall +1 on the patch.  
 This is encouraging commits of patches that subsequently break builds.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9847) TestGlobPath symlink tests fail to cleanup properly

2013-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738180#comment-13738180
 ] 

Hudson commented on HADOOP-9847:


SUCCESS: Integrated in Hadoop-Hdfs-trunk #1490 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1490/])
Fix CHANGES.txt for HADOOP-9847 (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1513252)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 TestGlobPath symlink tests fail to cleanup properly
 ---

 Key: HADOOP-9847
 URL: https://issues.apache.org/jira/browse/HADOOP-9847
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Andrew Wang
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 3.0.0, 2.3.0

 Attachments: HADOOP-9847.001.patch


 On our internal trunk Jenkins runs, I've seen failures like the following:
 {noformat}
 Error Message:
 Cannot delete /user/jenkins. Name node is in safe mode. Resources are low on 
 NN. Please add or free up more resources then turn off safe mode manually. 
 NOTE:  If you turn off safe mode before adding resources, the NN will 
 immediately return to safe mode. Use hdfs dfsadmin -safemode leave to turn 
 safe mode off.  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:3138)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:3097)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3081)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:671)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:491)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48087)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932)  at 
 org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2031)  at 
 org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2027)  at 
 java.security.AccessController.doPrivileged(Native Method)  at 
 javax.security.auth.Subject.doAs(Subject.java:396)  at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1493)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2025)
 Stack Trace:
 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
  Cannot delete /user/jenkins. Name node is in safe mode.
 Resources are low on NN. Please add or free up more resources then turn off 
 safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
 the NN will immediately return to safe mode. Use hdfs dfsadmin -safemode 
 leave to turn safe mode off.
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:3138)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:3097)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3081)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:671)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:491)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48087)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2031)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2027)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1493)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2025)
 at org.apache.hadoop.ipc.Client.call(Client.java:1399)
 at org.apache.hadoop.ipc.Client.call(Client.java:1352)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
 at $Proxy15.delete(Unknown Source)
 at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
 at 
 

[jira] [Commented] (HADOOP-9848) Create a MiniKDC for use with security testing

2013-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738181#comment-13738181
 ] 

Hudson commented on HADOOP-9848:


SUCCESS: Integrated in Hadoop-Hdfs-trunk #1490 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1490/])
HADOOP-9848. Create a MiniKDC for use with security testing. (ywskycn via tucu) 
(tucu: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1513308)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory/server
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory/server/kerberos
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory/server/kerberos/shared
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory/server/kerberos/shared/keytab
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory/server/kerberos/shared/keytab/HackedKeytab.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory/server/kerberos/shared/keytab/HackedKeytabEncoder.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/KerberosSecurityTestcase.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/MiniKdc.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/resources
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/resources/log4j.properties
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/resources/minikdc-krb5.conf
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/resources/minikdc.ldiff
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test/java
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test/java/org
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test/java/org/apache
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test/java/org/apache/hadoop
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test/java/org/apache/hadoop/minikdc
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test/java/org/apache/hadoop/minikdc/TestMiniKdc.java
* /hadoop/common/trunk/hadoop-common-project/pom.xml
* /hadoop/common/trunk/hadoop-project/pom.xml


 Create a MiniKDC for use with security testing
 --

 Key: HADOOP-9848
 URL: https://issues.apache.org/jira/browse/HADOOP-9848
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security, test
Reporter: Wei Yan
Assignee: Wei Yan
 Fix For: 2.3.0

 Attachments: HADOOP-9848.patch, HADOOP-9848.patch, HADOOP-9848.patch, 
 HADOOP-9848.patch, HADOOP-9848.patch


 Create a MiniKDC using Apache Directory Server. MiniKDC builds an embedded 
 KDC (key distribution center) and allows creating principals and keytabs on 
 the fly. MiniKDC can be integrated into Hadoop security unit testing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9848) Create a MiniKDC for use with security testing

2013-08-13 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738209#comment-13738209
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-9848:


Hi Wei,

Thanks for checking it.  I understand that these two javadoc warnings may be 
unavoidable.  However, after the patch is committed, all future builds will 
fail with javadoc warnings.  After this patch, all QA results will give -1 
javadoc. ... for any patch, for example [this 
one|https://issues.apache.org/jira/browse/HDFS-5089?focusedCommentId=13737796&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13737796].
  I believe that is not something you want.

Could we use some library other than Dn?  If there is really no way to fix 
it and we cannot use something else, we may increase OK_JAVADOC_WARNINGS, i.e.:
{code}
--- dev-support/test-patch.sh   (revision 1513320)
+++ dev-support/test-patch.sh   (working copy)
@@ -426,7 +426,7 @@
   echo There appear to be $javadocWarnings javadoc warnings generated by the 
patched build.
 
   #There are 11 warnings that are caused by things that are caused by using 
sun internal APIs.
-  OK_JAVADOC_WARNINGS=11;
+  OK_JAVADOC_WARNINGS=13;
   ### if current warnings greater than OK_JAVADOC_WARNINGS
   if [[ $javadocWarnings -ne $OK_JAVADOC_WARNINGS ]] ; then
 JIRA_COMMENT=$JIRA_COMMENT
{code}

 Create a MiniKDC for use with security testing
 --

 Key: HADOOP-9848
 URL: https://issues.apache.org/jira/browse/HADOOP-9848
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security, test
Reporter: Wei Yan
Assignee: Wei Yan
 Fix For: 2.3.0

 Attachments: HADOOP-9848.patch, HADOOP-9848.patch, HADOOP-9848.patch, 
 HADOOP-9848.patch, HADOOP-9848.patch


 Create a MiniKDC using Apache Directory Server. MiniKDC builds an embedded 
 KDC (key distribution center) and allows creating principals and keytabs on 
 the fly. MiniKDC can be integrated into Hadoop security unit testing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9868) Server must not advertise kerberos realm

2013-08-13 Thread Daryn Sharp (JIRA)
Daryn Sharp created HADOOP-9868:
---

 Summary: Server must not advertise kerberos realm
 Key: HADOOP-9868
 URL: https://issues.apache.org/jira/browse/HADOOP-9868
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker


HADOOP-9789 broke kerberos authentication by making the RPC server advertise 
the kerberos service principal realm.  SASL clients and servers do not support 
specifying a realm, so it must be removed from the advertisement.
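A minimal sketch of the direction this implies (an assumption, not the 
actual patch): strip the realm before advertising, since the SASL GSSAPI 
mechanism takes only a service name and a hostname.

{code}
public class PrincipalAdvertiser {
  // Reduce "service/host@REALM" to the "service/host" form that SASL
  // clients and servers can consume. The principal below is hypothetical.
  public static String stripRealm(String principal) {
    int at = principal.indexOf('@');
    return at < 0 ? principal : principal.substring(0, at);
  }

  public static void main(String[] args) {
    System.out.println(stripRealm("nn/namenode.example.com@EXAMPLE.COM"));
    // => nn/namenode.example.com
  }
}
{code}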

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9789) Support server advertised kerberos principals

2013-08-13 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp resolved HADOOP-9789.
-

Resolution: Fixed

Will be fixed by HADOOP-9868.

 Support server advertised kerberos principals
 -

 Key: HADOOP-9789
 URL: https://issues.apache.org/jira/browse/HADOOP-9789
 Project: Hadoop Common
  Issue Type: New Feature
  Components: ipc, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Fix For: 3.0.0, 2.1.1-beta

 Attachments: HADOOP-9789.2.patch, HADOOP-9789.patch, 
 HADOOP-9789.patch, hadoop-ojoshi-datanode-HW10351.local.log, 
 hadoop-ojoshi-namenode-HW10351.local.log


 The RPC client currently constructs the kerberos principal based on a 
 config value, usually with an _HOST substitution.  This means the service 
 principal must match the hostname the client is using to connect.  This 
 causes problems:
 * Prevents using HA with IP failover when the servers have distinct 
 principals from the failover hostname
 * Prevents clients from being able to access a service bound to multiple 
 interfaces.  Only the interface that matches the server's principal may be 
 used.
 The client should be able to use the SASL advertised principal (HADOOP-9698), 
 with appropriate safeguards, to acquire the correct service ticket.
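 For reference, a sketch of the _HOST substitution described above (the 
 config value and hostnames are hypothetical):
 {code}
 public class HostSubstitutionDemo {
   public static void main(String[] args) {
     String confPrincipal = "nn/_HOST@EXAMPLE.COM";  // hypothetical config value
     String connectHost = "namenode1.example.com";   // hostname the client dialed
     String principal = confPrincipal.replace("_HOST", connectHost);
     // => "nn/namenode1.example.com@EXAMPLE.COM". If the server behind an
     // IP-failover address actually runs as nn/namenode2..., the client
     // requests a ticket for the wrong principal and authentication fails.
     System.out.println(principal);
   }
 }
 {code}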

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9866) convert hadoop-auth testcases requiring kerberos to use minikdc

2013-08-13 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738232#comment-13738232
 ] 

Daryn Sharp commented on HADOOP-9866:
-

Also, converting {{TestSaslRPC}} would be great, but given the title it 
should perhaps be another jira.

How much time overhead does the minikdc add to test execution times?

 convert hadoop-auth testcases requiring kerberos to use minikdc
 ---

 Key: HADOOP-9866
 URL: https://issues.apache.org/jira/browse/HADOOP-9866
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Wei Yan



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9848) Create a MiniKDC for use with security testing

2013-08-13 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738235#comment-13738235
 ] 

Suresh Srinivas commented on HADOOP-9848:
-

[~tucu00] Please do not commit patches without addressing -1 from jenkins - 
https://issues.apache.org/jira/secure/EditComment!default.jspa?id=12662454&commentId=13737505.

 Create a MiniKDC for use with security testing
 --

 Key: HADOOP-9848
 URL: https://issues.apache.org/jira/browse/HADOOP-9848
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security, test
Reporter: Wei Yan
Assignee: Wei Yan
 Fix For: 2.3.0

 Attachments: HADOOP-9848.patch, HADOOP-9848.patch, HADOOP-9848.patch, 
 HADOOP-9848.patch, HADOOP-9848.patch


 Create a MiniKDC using Apache Directory Server. MiniKDC builds an embedded 
 KDC (key distribution center) and allows creating principals and keytabs on 
 the fly. MiniKDC can be integrated into Hadoop security unit testing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9848) Create a MiniKDC for use with security testing

2013-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738278#comment-13738278
 ] 

Hudson commented on HADOOP-9848:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1517 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1517/])
HADOOP-9848. Create a MiniKDC for use with security testing. (ywskycn via tucu) 
(tucu: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1513308)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory/server
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory/server/kerberos
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory/server/kerberos/shared
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory/server/kerberos/shared/keytab
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory/server/kerberos/shared/keytab/HackedKeytab.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/directory/server/kerberos/shared/keytab/HackedKeytabEncoder.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/KerberosSecurityTestcase.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/java/org/apache/hadoop/minikdc/MiniKdc.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/resources
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/resources/log4j.properties
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/resources/minikdc-krb5.conf
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/main/resources/minikdc.ldiff
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test/java
* /hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test/java/org
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test/java/org/apache
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test/java/org/apache/hadoop
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test/java/org/apache/hadoop/minikdc
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-minikdc/src/test/java/org/apache/hadoop/minikdc/TestMiniKdc.java
* /hadoop/common/trunk/hadoop-common-project/pom.xml
* /hadoop/common/trunk/hadoop-project/pom.xml


 Create a MiniKDC for use with security testing
 --

 Key: HADOOP-9848
 URL: https://issues.apache.org/jira/browse/HADOOP-9848
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security, test
Reporter: Wei Yan
Assignee: Wei Yan
 Fix For: 2.3.0

 Attachments: HADOOP-9848.patch, HADOOP-9848.patch, HADOOP-9848.patch, 
 HADOOP-9848.patch, HADOOP-9848.patch


 Create a MiniKDC using Apache Directory Server. MiniKDC builds an embedded 
 KDC (key distribution center) and allows creating principals and keytabs on 
 the fly. MiniKDC can be integrated into Hadoop security unit testing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9583) test-patch gives +1 despite build failure when running tests

2013-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738280#comment-13738280
 ] 

Hudson commented on HADOOP-9583:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1517 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1517/])
HADOOP-9583. test-patch gives +1 despite build failure when running tests. 
Contributed by Jason Lowe. (kihwal: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1513200)
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 test-patch gives +1 despite build failure when running tests
 

 Key: HADOOP-9583
 URL: https://issues.apache.org/jira/browse/HADOOP-9583
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical
 Fix For: 3.0.0

 Attachments: HADOOP-9583.another_dummy.patch, 
 HADOOP-9583-dummy.patch, HADOOP-9583-dummy.patch, 
 HADOOP-9583-dummy-without-changes.patch, HADOOP-9583.patch


 I've seen a couple of checkins recently where tests have timed out resulting 
 in a Maven build failure yet test-patch reports an overall +1 on the patch.  
 This is encouraging commits of patches that subsequently break builds.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9847) TestGlobPath symlink tests fail to cleanup properly

2013-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738277#comment-13738277
 ] 

Hudson commented on HADOOP-9847:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1517 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1517/])
Fix CHANGES.txt for HADOOP-9847 (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1513252)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 TestGlobPath symlink tests fail to cleanup properly
 ---

 Key: HADOOP-9847
 URL: https://issues.apache.org/jira/browse/HADOOP-9847
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Andrew Wang
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 3.0.0, 2.3.0

 Attachments: HADOOP-9847.001.patch


 On our internal trunk Jenkins runs, I've seen failures like the following:
 {noformat}
 Error Message:
 Cannot delete /user/jenkins. Name node is in safe mode. Resources are low on 
 NN. Please add or free up more resources then turn off safe mode manually. 
 NOTE:  If you turn off safe mode before adding resources, the NN will 
 immediately return to safe mode. Use hdfs dfsadmin -safemode leave to turn 
 safe mode off.  at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:3138)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:3097)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3081)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:671)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:491)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48087)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932)  at 
 org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2031)  at 
 org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2027)  at 
 java.security.AccessController.doPrivileged(Native Method)  at 
 javax.security.auth.Subject.doAs(Subject.java:396)  at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1493)
   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2025)
 Stack Trace:
 org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
  Cannot delete /user/jenkins. Name node is in safe mode.
 Resources are low on NN. Please add or free up more resources then turn off 
 safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
 the NN will immediately return to safe mode. Use hdfs dfsadmin -safemode 
 leave to turn safe mode off.
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:3138)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:3097)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3081)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:671)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:491)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48087)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2031)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2027)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1493)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2025)
 at org.apache.hadoop.ipc.Client.call(Client.java:1399)
 at org.apache.hadoop.ipc.Client.call(Client.java:1352)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
 at $Proxy15.delete(Unknown Source)
 at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
 at 
 

[jira] [Commented] (HADOOP-9845) Update protobuf to 2.5 from 2.4.x

2013-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738279#comment-13738279
 ] 

Hudson commented on HADOOP-9845:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1517 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1517/])
HADOOP-9845. Update protobuf to 2.5 from 2.4.x. (tucu) (tucu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1513281)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/pom.xml
* 
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java
* /hadoop/common/trunk/hadoop-project/pom.xml
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml


 Update protobuf to 2.5 from 2.4.x
 -

 Key: HADOOP-9845
 URL: https://issues.apache.org/jira/browse/HADOOP-9845
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance
Affects Versions: 2.0.5-alpha
Reporter: stack
Assignee: Alejandro Abdelnur
Priority: Blocker
 Fix For: 2.1.0-beta

 Attachments: HADOOP-9845.patch, HADOOP-9845.patch


 protobuf 2.5 is a bit faster, with a new Parser that avoids a builder step and 
 a few other goodies that we'd like to take advantage of over in hbase, 
 especially now that we are all pb all the time.  Unfortunately the 
 protoc-generated files are no longer compatible w/ 2.4.1-generated files.  
 Hadoop uses 2.4.1 pb.  This latter fact means we cannot upgrade until hadoop 
 does.
 This issue suggests hadoop2 move to protobuf 2.5.
 I can do the patch, no prob, if there is interest.
 (When we upgraded our build broke with complaints like the below:
 {code}
 java.lang.UnsupportedOperationException: This is supposed to be overridden by 
 subclasses.
   at 
 com.google.protobuf.GeneratedMessage.getUnknownFields(GeneratedMessage.java:180)
   at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetDatanodeReportRequestProto.getSerializedSize(ClientNamenodeProtocolProtos.java:21566)
   at 
 com.google.protobuf.AbstractMessageLite.toByteString(AbstractMessageLite.java:49)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.constructRpcRequest(ProtobufRpcEngine.java:149)
   at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:193)
   at com.sun.proxy.$Proxy14.getDatanodeReport(Unknown Source)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
   at 
 org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
   at com.sun.proxy.$Proxy14.getDatanodeReport(Unknown Source)
   at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDatanodeReport(ClientNamenodeProtocolTranslatorPB.java:488)
   at org.apache.hadoop.hdfs.DFSClient.datanodeReport(DFSClient.java:1887)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:1798
 ...
 {code}
 More over in HBASE-8165 if interested.
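For context on the "new Parser" mentioned above: a minimal runnable sketch of the two parsing paths, assuming protobuf-java 2.5 on the classpath and using the FileDescriptorProto message that ships inside protobuf-java (any generated message type would do):
{code}
import com.google.protobuf.DescriptorProtos.FileDescriptorProto;
import com.google.protobuf.InvalidProtocolBufferException;

public class ParserSketch {
  public static void main(String[] args) throws InvalidProtocolBufferException {
    byte[] bytes =
        FileDescriptorProto.newBuilder().setName("foo.proto").build().toByteArray();

    // protobuf 2.4.x style: parse through a builder hop
    FileDescriptorProto viaBuilder =
        FileDescriptorProto.newBuilder().mergeFrom(bytes).build();

    // protobuf 2.5 style: the generated static PARSER skips the builder step
    FileDescriptorProto viaParser = FileDescriptorProto.PARSER.parseFrom(bytes);

    System.out.println(viaBuilder.getName().equals(viaParser.getName()));  // true
  }
}
{code}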

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9868) Server must not advertise kerberos realm

2013-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738291#comment-13738291
 ] 

Hadoop QA commented on HADOOP-9868:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12597721/HADOOP-9868.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2974//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2974//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2974//console

This message is automatically generated.

 Server must not advertise kerberos realm
 

 Key: HADOOP-9868
 URL: https://issues.apache.org/jira/browse/HADOOP-9868
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9868.patch


 HADOOP-9789 broke kerberos authentication by making the RPC server advertise 
 the kerberos service principal realm.  SASL clients and servers do not 
 support specifying a realm, so it must be removed from the advertisement.
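For reference, the shape of the JDK SASL client API shows why a realm cannot simply be carried along: the server is named only by protocol and host. A minimal sketch (the protocol and host names are illustrative; a real run would also need a kerberos login context):
{code}
import java.util.Collections;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;
import javax.security.sasl.SaslException;

public class SaslClientSketch {
  public static void main(String[] args) throws SaslException {
    // Note the parameters: protocol ("nn") and serverName ("nn1.example.com").
    // There is no argument through which a client could pass a kerberos realm.
    SaslClient client = Sasl.createSaslClient(
        new String[] {"GSSAPI"},                  // mechanism
        null,                                     // authorization id
        "nn",                                     // protocol part of the principal
        "nn1.example.com",                        // host part of the principal
        Collections.<String, Object>emptyMap(),   // properties
        null);                                    // callback handler
    System.out.println("mechanism: " + client.getMechanismName());
  }
}
{code}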

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9381) Document dfs cp -f option

2013-08-13 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9381:


Attachment: HADOOP-9381.2.patch

Updated patch.

 Document dfs cp -f option
 -

 Key: HADOOP-9381
 URL: https://issues.apache.org/jira/browse/HADOOP-9381
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Keegan Witt
Assignee: Keegan Witt
Priority: Trivial
 Attachments: HADOOP-9381.1.patch, HADOOP-9381.2.patch, 
 HADOOP-9381.patch, HADOOP-9381.patch


 dfs cp should document -f (overwrite) option in the page displayed by -help. 
 Additionally, the HTML documentation page should also document this option 
 and all the options should all be formatted the same.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9848) Create a MiniKDC for use with security testing

2013-08-13 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738314#comment-13738314
 ] 

Alejandro Abdelnur commented on HADOOP-9848:


[~sureshms], [~szetszwo], I missed test-patch before (I thought the warning 
count was using trunk as the base). I've just committed a fix for test-patch in 
trunk and branch-2. Answering Nicholas: it is not possible to use a class other 
than Dn, as that is what ApacheDS uses.

 Create a MiniKDC for use with security testing
 --

 Key: HADOOP-9848
 URL: https://issues.apache.org/jira/browse/HADOOP-9848
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security, test
Reporter: Wei Yan
Assignee: Wei Yan
 Fix For: 2.3.0

 Attachments: HADOOP-9848.patch, HADOOP-9848.patch, HADOOP-9848.patch, 
 HADOOP-9848.patch, HADOOP-9848.patch


 Create a MiniKDC using Apache Directory Server. MiniKDC builds an embedded 
 KDC (key distribution center), and allows to create principals and keytabs on 
 the fly. MiniKDC can be integrated for Hadoop security unit testing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9848) Create a MiniKDC for use with security testing

2013-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738331#comment-13738331
 ] 

Hudson commented on HADOOP-9848:


SUCCESS: Integrated in Hadoop-trunk-Commit #4252 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4252/])
HADOOP-9848 Addendum fixing OK_JAVADOC_WARNINGS in test-patch (tucu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1513527)
* /hadoop/common/trunk/dev-support/test-patch.sh


 Create a MiniKDC for use with security testing
 --

 Key: HADOOP-9848
 URL: https://issues.apache.org/jira/browse/HADOOP-9848
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security, test
Reporter: Wei Yan
Assignee: Wei Yan
 Fix For: 2.3.0

 Attachments: HADOOP-9848.patch, HADOOP-9848.patch, HADOOP-9848.patch, 
 HADOOP-9848.patch, HADOOP-9848.patch


 Create a MiniKDC using Apache Directory Server. MiniKDC builds an embedded 
 KDC (key distribution center), and allows to create principals and keytabs on 
 the fly. MiniKDC can be integrated for Hadoop security unit testing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9866) convert hadoop-auth testcases requiring kerberos to use minikdc

2013-08-13 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738333#comment-13738333
 ] 

Alejandro Abdelnur commented on HADOOP-9866:


[~daryn], minikdc takes 3 secs to start/stop. IMO we should convert only the 
testcases that exercise security components. There are a few testcases 
throughout the codebase that have a kerberos profile and are disabled by 
default (hadoop-auth & httpfs). I'm starting with those. Agreed, doing the 
SaslRPC ones makes sense as well, but I'd do them as separate JIRAs. Do you 
want to take a stab at the SaslRPC ones? It should be quite simple (I've been 
using minikdc to test some thrift services doing both client/server stuff -all 
in the same process- and it works like a charm).
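
For anyone following along, a minimal sketch of what a converted testcase's setup looks like with the HADOOP-9848 MiniKDC (the work directory and principal names are illustrative):
{code}
import java.io.File;
import java.util.Properties;
import org.apache.hadoop.minikdc.MiniKdc;

public class MiniKdcSketch {
  public static void main(String[] args) throws Exception {
    Properties conf = MiniKdc.createConf();          // default embedded-KDC settings
    File workDir = new File("target/minikdc-work");  // illustrative scratch dir
    workDir.mkdirs();

    MiniKdc kdc = new MiniKdc(conf, workDir);
    kdc.start();                                     // embedded ApacheDS KDC
    try {
      File keytab = new File(workDir, "test.keytab");
      // Create principals and a keytab on the fly for the test to use.
      kdc.createPrincipal(keytab, "client", "HTTP/localhost");
      System.out.println("realm: " + kdc.getRealm());
      // ... point the test's JAAS/krb5 configuration at the keytab and KDC here ...
    } finally {
      kdc.stop();
    }
  }
}
{code}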

 convert hadoop-auth testcases requiring kerberos to use minikdc
 ---

 Key: HADOOP-9866
 URL: https://issues.apache.org/jira/browse/HADOOP-9866
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Wei Yan



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9381) Document dfs cp -f option

2013-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738351#comment-13738351
 ] 

Hadoop QA commented on HADOOP-9381:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12597730/HADOOP-9381.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2975//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2975//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2975//console

This message is automatically generated.

 Document dfs cp -f option
 -

 Key: HADOOP-9381
 URL: https://issues.apache.org/jira/browse/HADOOP-9381
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Keegan Witt
Assignee: Keegan Witt
Priority: Trivial
 Attachments: HADOOP-9381.1.patch, HADOOP-9381.2.patch, 
 HADOOP-9381.patch, HADOOP-9381.patch


 dfs cp should document -f (overwrite) option in the page displayed by -help. 
 Additionally, the HTML documentation page should also document this option 
 and all the options should all be formatted the same.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9868) Server must not advertise kerberos realm

2013-08-13 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738362#comment-13738362
 ] 

Alejandro Abdelnur commented on HADOOP-9868:


[~daryn], I'm a bit puzzled by HADOOP-9789. While I understand the reasoning 
for it, doesn't that weaken security? An impersonator can publish an alternate 
principal for which it has a keytab.

 Server must not advertise kerberos realm
 

 Key: HADOOP-9868
 URL: https://issues.apache.org/jira/browse/HADOOP-9868
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-9868.patch


 HADOOP-9789 broke kerberos authentication by making the RPC server advertise 
 the kerberos service principal realm.  SASL clients and servers do not 
 support specifying a realm, so it must be removed from the advertisement.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9863) Snappy compression in branch1 cannot work on Big-Endian 64 bit platform w/o backporting HADOOP-8686

2013-08-13 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738399#comment-13738399
 ] 

Yu Li commented on HADOOP-9863:
---

Hi Tian Hong,

This is a partial backport of HADOOP-8686, which is already in trunk/branch2 
but missing in branch1. 

 Snappy compression in branch1 cannot work on Big-Endian 64 bit platform w/o 
 backporting HADOOP-8686
 ---

 Key: HADOOP-9863
 URL: https://issues.apache.org/jira/browse/HADOOP-9863
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 1.1.1, 1.1.2, 1.2.1
Reporter: Yu Li
Assignee: Yu Li
  Labels: native, ppc64, snappy
 Attachments: HADOOP-9863.patch


 w/o the changes made in HADOOP-8686 to SnappyCompressor.c, snappy compression 
 in branch-1 hadoop on big-endian 64-bit platforms (ppc64, for example) will 
 generate an incorrect, almost-empty .snappy file because of the type cast from 
 size_t to jint. Will include more detailed analysis in comments.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9869) Configuration.getSocketAddr() should use getTrimmed()

2013-08-13 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-9869:
--

 Summary:  Configuration.getSocketAddr() should use getTrimmed()
 Key: HADOOP-9869
 URL: https://issues.apache.org/jira/browse/HADOOP-9869
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0, 2.1.0-beta, 1.3.0
Reporter: Steve Loughran
Priority: Minor


YARN-1059 has shown that the hostname:port string used for the address of 
things like the RM isn't trimmed before it's parsed, leading to errors that 
aren't that obvious. 

We should trim it - it's clearly not going to break any existing (valid) 
configurations.
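
A minimal sketch of the difference (the property name and value are illustrative):
{code}
import org.apache.hadoop.conf.Configuration;

public class TrimSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    // Whitespace like this easily sneaks in from hand-edited XML config files.
    conf.set("yarn.resourcemanager.address", " rmhost:8032 ");

    // get() keeps the stray spaces, which later break host:port parsing:
    System.out.println("[" + conf.get("yarn.resourcemanager.address") + "]");

    // getTrimmed() is what getSocketAddr() should be reading:
    System.out.println("[" + conf.getTrimmed("yarn.resourcemanager.address") + "]");
  }
}
{code}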

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9865) FileContext.globStatus() has a regression with respect to relative path

2013-08-13 Thread Chuan Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuan Liu updated HADOOP-9865:
--

Affects Version/s: 2.3.0
   3.0.0

 FileContext.globStatus() has a regression with respect to relative path
 ---

 Key: HADOOP-9865
 URL: https://issues.apache.org/jira/browse/HADOOP-9865
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Chuan Liu
Assignee: Chuan Liu
 Attachments: HADOOP-9865-demo.patch


 I discovered the problem when running unit test TestMRJobClient on Windows. 
 The cause is indirect in this case. In the unit test, we try to launch a job 
 and list its status. The job failed, which caused the list command to get a 
 result of 0, which triggered the unit test assert. From the log and debugging, 
 the job failed because we failed to create the Jar with classpath (see code 
 around {{FileUtil.createJarWithClassPath}}) in {{ContainerLaunch}}. This is a 
 Windows-specific step right now, so the test still passes on Linux. This step 
 failed because we passed a relative path to {{FileContext.globStatus()}} in 
 {{FileUtil.createJarWithClassPath}}. The relevant log looks like the 
 following.
 {noformat}
 2013-08-12 16:12:05,937 WARN  [ContainersLauncher #0] 
 launcher.ContainerLaunch (ContainerLaunch.java:call(270)) - Failed to launch 
 container.
 org.apache.hadoop.HadoopIllegalArgumentException: Path is relative
   at org.apache.hadoop.fs.Path.checkNotRelative(Path.java:74)
   at org.apache.hadoop.fs.FileContext.getFSofPath(FileContext.java:304)
   at org.apache.hadoop.fs.Globber.schemeFromPath(Globber.java:107)
   at org.apache.hadoop.fs.Globber.glob(Globber.java:128)
   at 
 org.apache.hadoop.fs.FileContext$Util.globStatus(FileContext.java:1908)
   at 
 org.apache.hadoop.fs.FileUtil.createJarWithClassPath(FileUtil.java:1247)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.sanitizeEnv(ContainerLaunch.java:679)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:232)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:1)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 {noformat}
 I think this is a regression from HADOOP-9817. I modified some code and the 
 unit test passed. (See the attached patch.) However, I think the impact is 
 larger. I will add some unit tests to verify the behavior, and work on a more 
 complete fix.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9346) Upgrading to protoc 2.5.0 fails the build

2013-08-13 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738585#comment-13738585
 ] 

Ravi Prakash commented on HADOOP-9346:
--

Can we close this JIRA now that HADOOP-9845 is in trunk?

 Upgrading to protoc 2.5.0 fails the build
 -

 Key: HADOOP-9346
 URL: https://issues.apache.org/jira/browse/HADOOP-9346
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
  Labels: protobuf
 Attachments: HADOOP-9346.patch


 Reported over the impala lists, one of the errors received is:
 {code}
 src/hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ha/proto/ZKFCProtocolProtos.java:[104,37]
  can not find symbol.
 symbol: class Parser
 location: package com.google.protobuf
 {code}
 Worth looking into as we'll eventually someday bump our protobuf deps.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9865) FileContext.globStatus() has a regression with respect to relative path

2013-08-13 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738688#comment-13738688
 ] 

Colin Patrick McCabe commented on HADOOP-9865:
--

{code}
+String scheme = schemeFromPath(fixRelativePart(pathPattern));
+String authority = authorityFromPath(fixRelativePart(pathPattern));
{code}
This is a good start, but the problem is that pathPattern is not actually a 
path; it's a pattern.  So it may be something like {/,a}/foo, which you can't 
really make into an absolute path in a sensible way.

I think the right fix is something like this:
{code}
diff --git 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java
 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java
index ad28478..378311a 100644
--- 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java
+++ 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java
@@ -99,24 +99,24 @@ private Path fixRelativePart(Path path) {
   }
 
   private String schemeFromPath(Path path) throws IOException {
-String scheme = pathPattern.toUri().getScheme();
+String scheme = path.toUri().getScheme();
 if (scheme == null) {
   if (fs != null) {
 scheme = fs.getUri().getScheme();
   } else {
-scheme = fc.getFSofPath(path).getUri().getScheme();
+scheme = fc.getDefaultFileSystem().getUri().getScheme();
   }
 }
 return scheme;
   }
 
   private String authorityFromPath(Path path) throws IOException {
-String authority = pathPattern.toUri().getAuthority();
+String authority = path.toUri().getAuthority();
 if (authority == null) {
   if (fs != null) {
 authority = fs.getUri().getAuthority();
   } else {
-authority = fc.getFSofPath(path).getUri().getAuthority();
+authority = fc.getDefaultFileSystem().getUri().getAuthority();
   }
 }
 return authority ;
{code}

This probably needs more testing, including unit tests...
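
For example, a regression test along these lines (a minimal sketch; the directory and file names are illustrative) would exercise the relative-path case through FileContext:
{code}
import java.util.EnumSet;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class RelativeGlobSketch {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getLocalFSFileContext();
    Path dir = new Path("globTestDir");              // deliberately relative
    fc.mkdir(dir, FsPermission.getDefault(), true);
    fc.create(new Path(dir, "file1"), EnumSet.of(CreateFlag.CREATE)).close();
    try {
      // Before the fix this throws HadoopIllegalArgumentException: Path is relative
      FileStatus[] matches = fc.util().globStatus(new Path("globTestDir/*"));
      System.out.println("matched " + matches.length + " file(s)");
    } finally {
      fc.delete(dir, true);
    }
  }
}
{code}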

 FileContext.globStatus() has a regression with respect to relative path
 ---

 Key: HADOOP-9865
 URL: https://issues.apache.org/jira/browse/HADOOP-9865
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Chuan Liu
Assignee: Chuan Liu
 Attachments: HADOOP-9865-demo.patch


 I discovered the problem when running unit test TestMRJobClient on Windows. 
 The cause is indirect in this case. In the unit test, we try to launch a job 
 and list its status. The job failed, which caused the list command to get a 
 result of 0, which triggered the unit test assert. From the log and debugging, 
 the job failed because we failed to create the Jar with classpath (see code 
 around {{FileUtil.createJarWithClassPath}}) in {{ContainerLaunch}}. This is a 
 Windows-specific step right now, so the test still passes on Linux. This step 
 failed because we passed a relative path to {{FileContext.globStatus()}} in 
 {{FileUtil.createJarWithClassPath}}. The relevant log looks like the 
 following.
 {noformat}
 2013-08-12 16:12:05,937 WARN  [ContainersLauncher #0] 
 launcher.ContainerLaunch (ContainerLaunch.java:call(270)) - Failed to launch 
 container.
 org.apache.hadoop.HadoopIllegalArgumentException: Path is relative
   at org.apache.hadoop.fs.Path.checkNotRelative(Path.java:74)
   at org.apache.hadoop.fs.FileContext.getFSofPath(FileContext.java:304)
   at org.apache.hadoop.fs.Globber.schemeFromPath(Globber.java:107)
   at org.apache.hadoop.fs.Globber.glob(Globber.java:128)
   at 
 org.apache.hadoop.fs.FileContext$Util.globStatus(FileContext.java:1908)
   at 
 org.apache.hadoop.fs.FileUtil.createJarWithClassPath(FileUtil.java:1247)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.sanitizeEnv(ContainerLaunch.java:679)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:232)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:1)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 {noformat}
 I think this is a regression from HADOOP-9817. I modified some code and the 
 unit test passed. (See the attached patch.) However, I think the impact is 
 larger. I will add some unit tests to verify the behavior, and 

[jira] [Commented] (HADOOP-9446) Support Kerberos HTTP SPNEGO authentication for non-SUN JDK

2013-08-13 Thread Yu Gao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13738744#comment-13738744
 ] 

Yu Gao commented on HADOOP-9446:


Regarding the latest patch test report:
(1) There are 13 javadoc warnings in total. Besides the known 11 warnings 
introduced by sun APIs, which are expected, there are 2 new ones from the 
hadoop-minikdc module:
[WARNING] Javadoc Warnings
[WARNING] 
org/apache/directory/api/ldap/model/name/Dn.class(org/apache/directory/api/ldap/model/name:Dn.class):
 warning: Cannot find annotation method 'value()' in type 
'edu.umd.cs.findbugs.annotations.SuppressWarnings': class file for 
edu.umd.cs.findbugs.annotations.SuppressWarnings not found
[WARNING] 
org/apache/directory/api/ldap/model/name/Dn.class(org/apache/directory/api/ldap/model/name:Dn.class):
 warning: Cannot find annotation method 'justification()' in type 
'edu.umd.cs.findbugs.annotations.SuppressWarnings'

which were introduced by HADOOP-9848.

(2) The two findbugs issues are caused by two fields of class 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem, which is likewise not 
introduced or modified by this patch. Not sure why they popped up here...


 Support Kerberos HTTP SPNEGO authentication for non-SUN JDK
 ---

 Key: HADOOP-9446
 URL: https://issues.apache.org/jira/browse/HADOOP-9446
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Yu Gao
Assignee: Yu Gao
 Attachments: HADOOP-9446-branch-2.patch, 
 HADOOP-9446-branch-2-v2.patch, HADOOP-9446.patch, HADOOP-9446-v2.patch, 
 TestKerberosHttpSPNEGO.java, 
 TEST-org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator.xml,
  
 TEST-org.apache.hadoop.security.authentication.server.TestKerberosAuthenticationHandler.xml


 Class KerberosAuthenticator and KerberosAuthenticationHandler currently only 
 support running with SUN JDK when Kerberos is enabled. In order to support  
 alternative JDKs like IBM JDK which has different options supported by 
 Krb5LoginModule and different login module classes, the HTTP Kerberos 
 authentication classes need to be changed.
 In addition, NT_GSS_KRB5_PRINCIPAL, which is used in KerberosAuthenticator to 
 get the corresponding oid instance, is a field defined in SUN JDK, but not in 
 IBM JDK.
 This JIRA is to fix the existing problems and add support for Kerberos HTTP 
 SPNEGO authentication with non-SUN JDK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9870) Mixed configurations for JVM -Xmx in hadoop command

2013-08-13 Thread Wei Yan (JIRA)
Wei Yan created HADOOP-9870:
---

 Summary: Mixed configurations for JVM -Xmx in hadoop command
 Key: HADOOP-9870
 URL: https://issues.apache.org/jira/browse/HADOOP-9870
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan


When we use the hadoop command to launch a class, there are two places that set 
the -Xmx configuration.

*1*. The first place is located in file 
{{hadoop-common-project/hadoop-common/src/main/bin/hadoop}}.
{code}
exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
{code}
Here $JAVA_HEAP_MAX is configured in hadoop-config.sh 
({{hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh}}). The 
default value is -Xmx1000m.

*2*. The second place is set with $HADOOP_OPTS in file 
{{hadoop-common-project/hadoop-common/src/main/bin/hadoop}}.
{code}
HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
{code}
Here $HADOOP_CLIENT_OPTS is set in hadoop-env.sh 
({{hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh}})
{code}
export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
{code}

Currently the final default java command looks like:
{code}java -Xmx1000m  -Xmx512m CLASS_NAME ARGUMENTS{code}

And if users also specify the -Xmx in the $HADOOP_CLIENT_OPTS, there will be 
three -Xmx configurations. 

The hadoop setup tutorial only discusses hadoop-env.sh, and it looks like users 
are not expected to make any changes in hadoop-config.sh.

We should make hadoop smart enough to choose the right one before launching the 
java command, instead of leaving the decision to the JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9871) Fix intermittent findbug warnings in DefaultMetricsSystem

2013-08-13 Thread Luke Lu (JIRA)
Luke Lu created HADOOP-9871:
---

 Summary: Fix intermittent findbug warnings in DefaultMetricsSystem
 Key: HADOOP-9871
 URL: https://issues.apache.org/jira/browse/HADOOP-9871
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Luke Lu
Assignee: Junping Du
Priority: Minor


Findbugs sometimes (not always) picks up warnings from DefaultMetricsSystem due 
to some of its fields not being transient in a serializable class 
(DefaultMetricsSystem is an Enum, and enums are serializable).
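
A hedged sketch of the pattern findbugs is reacting to (the enum and field below are illustrative, not the actual DefaultMetricsSystem code):
{code}
public enum MetricsSystemHolder {
  INSTANCE;

  // Enum types are implicitly Serializable, so findbugs (SE_BAD_FIELD) flags
  // instance fields whose types are not Serializable, such as Thread. Marking
  // the field transient silences the warning without changing behavior:
  private transient Thread pollerThread;
}
{code}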

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9487) Deprecation warnings in Configuration should go to their own log or otherwise be suppressible

2013-08-13 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738771#comment-13738771
 ] 

Sergey Shelukhin commented on HADOOP-9487:
--

ping?

 Deprecation warnings in Configuration should go to their own log or otherwise 
 be suppressible
 -

 Key: HADOOP-9487
 URL: https://issues.apache.org/jira/browse/HADOOP-9487
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 3.0.0
Reporter: Steve Loughran
 Attachments: HADOOP-9487.patch, HADOOP-9487.patch


 Running local pig jobs triggers large quantities of warnings about deprecated 
 properties - something I don't care about, as I'm not in a position to fix 
 them without delving into Pig. 
 I can suppress them by changing the log level, but that can hide other 
 warnings that may actually matter.
 If there were a special Configuration.deprecated log for all deprecation 
 messages, it could be suppressed by people who don't want noisy logs.
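
With such a logger in place, suppression would be a one-liner. A sketch assuming the dedicated logger is named "org.apache.hadoop.conf.Configuration.deprecation" (the name is illustrative, pending the patch):
{code}
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class SuppressDeprecationSketch {
  public static void main(String[] args) {
    // Silence only the deprecation chatter; other Configuration warnings still show.
    Logger.getLogger("org.apache.hadoop.conf.Configuration.deprecation")
        .setLevel(Level.OFF);
  }
}
{code}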

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9865) FileContext.globStatus() has a regression with respect to relative path

2013-08-13 Thread Chuan Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738807#comment-13738807
 ] 

Chuan Liu commented on HADOOP-9865:
---

Thanks for the suggestion, Colin! I was also thinking of something similar. The 
patch was just to demo the problem in the description. I will add some unit 
tests to cover the scenarios as well!

 FileContext.globStatus() has a regression with respect to relative path
 ---

 Key: HADOOP-9865
 URL: https://issues.apache.org/jira/browse/HADOOP-9865
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Chuan Liu
Assignee: Chuan Liu
 Attachments: HADOOP-9865-demo.patch


 I discovered the problem when running unit test TestMRJobClient on Windows. 
 The cause is indirect in this case. In the unit test, we try to launch a job 
 and list its status. The job failed, which caused the list command to get a 
 result of 0, which triggered the unit test assert. From the log and debugging, 
 the job failed because we failed to create the Jar with classpath (see code 
 around {{FileUtil.createJarWithClassPath}}) in {{ContainerLaunch}}. This is a 
 Windows-specific step right now, so the test still passes on Linux. This step 
 failed because we passed a relative path to {{FileContext.globStatus()}} in 
 {{FileUtil.createJarWithClassPath}}. The relevant log looks like the 
 following.
 {noformat}
 2013-08-12 16:12:05,937 WARN  [ContainersLauncher #0] 
 launcher.ContainerLaunch (ContainerLaunch.java:call(270)) - Failed to launch 
 container.
 org.apache.hadoop.HadoopIllegalArgumentException: Path is relative
   at org.apache.hadoop.fs.Path.checkNotRelative(Path.java:74)
   at org.apache.hadoop.fs.FileContext.getFSofPath(FileContext.java:304)
   at org.apache.hadoop.fs.Globber.schemeFromPath(Globber.java:107)
   at org.apache.hadoop.fs.Globber.glob(Globber.java:128)
   at 
 org.apache.hadoop.fs.FileContext$Util.globStatus(FileContext.java:1908)
   at 
 org.apache.hadoop.fs.FileUtil.createJarWithClassPath(FileUtil.java:1247)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.sanitizeEnv(ContainerLaunch.java:679)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:232)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:1)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 {noformat}
 I think this is a regression from HADOOP-9817. I modified some code and the 
 unit test passed. (See the attached patch.) However, I think the impact is 
 larger. I will add some unit tests to verify the behavior, and work on a more 
 complete fix.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9446) Support Kerberos HTTP SPNEGO authentication for non-SUN JDK

2013-08-13 Thread Luke Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738848#comment-13738848
 ] 

Luke Lu commented on HADOOP-9446:
-

The v2 patch lgtm. +1. Filed HADOOP-9871 to address #2. Will commit shortly.

 Support Kerberos HTTP SPNEGO authentication for non-SUN JDK
 ---

 Key: HADOOP-9446
 URL: https://issues.apache.org/jira/browse/HADOOP-9446
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Yu Gao
Assignee: Yu Gao
 Attachments: HADOOP-9446-branch-2.patch, 
 HADOOP-9446-branch-2-v2.patch, HADOOP-9446.patch, HADOOP-9446-v2.patch, 
 TestKerberosHttpSPNEGO.java, 
 TEST-org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator.xml,
  
 TEST-org.apache.hadoop.security.authentication.server.TestKerberosAuthenticationHandler.xml


 Class KerberosAuthenticator and KerberosAuthenticationHandler currently only 
 support running with SUN JDK when Kerberos is enabled. In order to support  
 alternative JDKs like IBM JDK which has different options supported by 
 Krb5LoginModule and different login module classes, the HTTP Kerberos 
 authentication classes need to be changed.
 In addition, NT_GSS_KRB5_PRINCIPAL, which is used in KerberosAuthenticator to 
 get the corresponding oid instance, is a field defined in SUN JDK, but not in 
 IBM JDK.
 This JIRA is to fix the existing problems and add support for Kerberos HTTP 
 SPNEGO authentication with non-SUN JDK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9870) Mixed configurations for JVM -Xmx in hadoop command

2013-08-13 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738972#comment-13738972
 ] 

Kai Zheng commented on HADOOP-9870:
---

Similar issue was also found for HDFS-related services; ref. HDFS-5087.

It looks like we have two approaches to set the JVM max heap: configuring 
JAVA_HEAP_MAX, or adding an -Xmx option directly to the related *_OPTS. We 
should avoid using both, and the conflict that results.

I also looked at other components like YARN. To be consistent, I would suggest 
we fix this by:
1. Adding a variable HADOOP_CLIENT_HEAPSIZE, which users can set either in 
hadoop-env.sh or hadoop-config.sh;
2. In the appropriate script, checking HADOOP_CLIENT_HEAPSIZE and setting its 
value into JAVA_HEAP_MAX if present.


 Mixed configurations for JVM -Xmx in hadoop command
 ---

 Key: HADOOP-9870
 URL: https://issues.apache.org/jira/browse/HADOOP-9870
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan

 When we use the hadoop command to launch a class, there are two places that 
 set the -Xmx configuration.
 *1*. The first place is located in file 
 {{hadoop-common-project/hadoop-common/src/main/bin/hadoop}}.
 {code}
 exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
 {code}
 Here $JAVA_HEAP_MAX is configured in hadoop-config.sh 
 ({{hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh}}). The 
 default value is -Xmx1000m.
 *2*. The second place is set with $HADOOP_OPTS in file 
 {{hadoop-common-project/hadoop-common/src/main/bin/hadoop}}.
 {code}
 HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
 {code}
 Here $HADOOP_CLIENT_OPTS is set in hadoop-env.sh 
 ({{hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh}})
 {code}
 export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
 {code}
 Currently the final default java command looks like:
 {code}java -Xmx1000m  -Xmx512m CLASS_NAME ARGUMENTS{code}
 And if users also specify the -Xmx in the $HADOOP_CLIENT_OPTS, there will be 
 three -Xmx configurations. 
 The hadoop setup tutorial only discusses hadoop-env.sh, and it looks like 
 users are not expected to make any changes in hadoop-config.sh.
 We should make hadoop smart enough to choose the right one before launching 
 the java command, instead of leaving the decision to the JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9652) RawLocalFs#getFileLinkStatus does not fill in the link owner and mode

2013-08-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-9652:


Attachment: (was: hadoop-9652-6.patch)

 RawLocalFs#getFileLinkStatus does not fill in the link owner and mode
 -

 Key: HADOOP-9652
 URL: https://issues.apache.org/jira/browse/HADOOP-9652
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
 Fix For: 2.3.0

 Attachments: hadoop-9452-1.patch, hadoop-9652-2.patch, 
 hadoop-9652-3.patch, hadoop-9652-4.patch, hadoop-9652-5.patch, 
 hadoop-9652-6.patch


 {{RawLocalFs#getFileLinkStatus}} does not actually get the owner and mode of 
 the symlink, but instead uses the owner and mode of the symlink target.  If 
 the target can't be found, it fills in bogus values (the empty string and 
 FsPermission.getDefault) for these.
 Symlinks have an owner distinct from the owner of the target they point to, 
 and getFileLinkStatus ought to expose this.
 In some operating systems, symlinks can have a permission other than 0777.  
 We ought to expose this in RawLocalFilesystem and other places, although we 
 don't necessarily have to support this behavior in HDFS.
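
For illustration only (this is not the attached patch): with Java 7 NIO, the lstat-style behavior the issue asks for looks like this (the link path is illustrative):
{code}
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFileAttributes;

public class LinkStatSketch {
  public static void main(String[] args) throws Exception {
    PosixFileAttributes linkAttrs = Files.readAttributes(
        Paths.get("/tmp/mylink"),
        PosixFileAttributes.class,
        LinkOption.NOFOLLOW_LINKS);   // read the symlink itself, like lstat(2)
    System.out.println("owner: " + linkAttrs.owner().getName());
    System.out.println("mode:  " + linkAttrs.permissions());
  }
}
{code}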

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9652) RawLocalFs#getFileLinkStatus does not fill in the link owner and mode

2013-08-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-9652:


Attachment: hadoop-9652-6.patch

 RawLocalFs#getFileLinkStatus does not fill in the link owner and mode
 -

 Key: HADOOP-9652
 URL: https://issues.apache.org/jira/browse/HADOOP-9652
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
 Fix For: 2.3.0

 Attachments: hadoop-9452-1.patch, hadoop-9652-2.patch, 
 hadoop-9652-3.patch, hadoop-9652-4.patch, hadoop-9652-5.patch, 
 hadoop-9652-6.patch


 {{RawLocalFs#getFileLinkStatus}} does not actually get the owner and mode of 
 the symlink, but instead uses the owner and mode of the symlink target.  If 
 the target can't be found, it fills in bogus values (the empty string and 
 FsPermission.getDefault) for these.
 Symlinks have an owner distinct from the owner of the target they point to, 
 and getFileLinkStatus ought to expose this.
 In some operating systems, symlinks can have a permission other than 0777.  
 We ought to expose this in RawLocalFilesystem and other places, although we 
 don't necessarily have to support this behavior in HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9740) FsShell's Text command does not read avro data files stored on HDFS

2013-08-13 Thread Doug Cutting (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doug Cutting updated HADOOP-9740:
-

Resolution: Fixed
  Assignee: Allan Yan
Status: Resolved  (was: Patch Available)

I committed this.  Thanks, Allan.

 FsShell's Text command does not read avro data files stored on HDFS
 ---

 Key: HADOOP-9740
 URL: https://issues.apache.org/jira/browse/HADOOP-9740
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.5-alpha
Reporter: Allan Yan
Assignee: Allan Yan
  Labels: patch
 Attachments: HADOOP-9740.patch, HADOOP-9740.patch, 
 maven_unit_test_error.log


 HADOOP-8597 added support for reading avro data files from the FsShell Text 
 command. However, it does not work with files stored on HDFS. Here is the 
 error message:
 {code}
 $hadoop fs -text hdfs://localhost:8020/test.avro
 -text: URI scheme is not file
 Usage: hadoop fs [generic options] -text [-ignoreCrc] <src> ...
 {code}
 The problem is that the File constructor cannot recognize the hdfs:// scheme 
 during AvroFileInputStream initialization. 
 There is a unit test, TestTextCommand.java, under the hadoop-common project. 
 However, it only tests files in the local file system. I created a similar one 
 under the hadoop-hdfs project using MiniDFSCluster. Please see the attached 
 maven unit test error message with full stack trace for more details.
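
For context, a sketch of why going through the Hadoop FileSystem API (rather than java.io.File) handles any scheme; the host, port and path are illustrative:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OpenAvroSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path avroPath = new Path("hdfs://localhost:8020/test.avro");
    FileSystem fs = avroPath.getFileSystem(conf);    // resolves hdfs:// (or any scheme)
    try (FSDataInputStream in = fs.open(avroPath)) {
      byte[] magic = new byte[4];
      in.readFully(magic);                           // avro files start with "Obj" + 0x01
      System.out.println("magic: " + new String(magic, 0, 3, "US-ASCII"));
    }
  }
}
{code}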
  
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9870) Mixed configurations for JVM -Xmx in hadoop command

2013-08-13 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738993#comment-13738993
 ] 

Wei Yan commented on HADOOP-9870:
-

[~drankye] I agree with your approach. 
Just one concern: $HADOOP_CLIENT_OPTS tries to wrap all configurations from the 
user side and pass them to the hadoop command. If we introduce another variable 
$HADOOP_CLIENT_HEAPSIZE, it may complicate the original mechanism.

Another possible approach may be:
1. In {{hadoop-env.sh}}, before {{export HADOOP_CLIENT_OPTS="-Xmx512m 
$HADOOP_CLIENT_OPTS"}}, check whether $HADOOP_CLIENT_OPTS already contains 
-Xmx. If it does, ignore the default -Xmx512m; otherwise, take the -Xmx512m.
2. In {{hadoop}}, before {{exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS 
"$@"}}, check whether $HADOOP_OPTS contains -Xmx. If it does, don't include 
$JAVA_HEAP_MAX in the command; otherwise, take the $JAVA_HEAP_MAX.

Let's wait for comments from others.


 Mixed configurations for JVM -Xmx in hadoop command
 ---

 Key: HADOOP-9870
 URL: https://issues.apache.org/jira/browse/HADOOP-9870
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan

 When we use the hadoop command to launch a class, there are two places that 
 set the -Xmx configuration.
 *1*. The first place is located in file 
 {{hadoop-common-project/hadoop-common/src/main/bin/hadoop}}.
 {code}
 exec "$JAVA" $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
 {code}
 Here $JAVA_HEAP_MAX is configured in hadoop-config.sh 
 ({{hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh}}). The 
 default value is -Xmx1000m.
 *2*. The second place is set with $HADOOP_OPTS in file 
 {{hadoop-common-project/hadoop-common/src/main/bin/hadoop}}.
 {code}
 HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
 {code}
 Here $HADOOP_CLIENT_OPTS is set in hadoop-env.sh 
 ({{hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh}})
 {code}
 export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
 {code}
 Currently the final default java command looks like:
 {code}java -Xmx1000m  -Xmx512m CLASS_NAME ARGUMENTS{code}
 And if users also specify the -Xmx in the $HADOOP_CLIENT_OPTS, there will be 
 three -Xmx configurations. 
 The hadoop setup tutorial only discusses hadoop-env.sh, and it looks like 
 users are not expected to make any changes in hadoop-config.sh.
 We should make hadoop smart enough to choose the right one before launching 
 the java command, instead of leaving the decision to the JVM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9446) Support Kerberos HTTP SPNEGO authentication for non-SUN JDK

2013-08-13 Thread Luke Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Lu updated HADOOP-9446:


   Resolution: Fixed
Fix Version/s: 2.1.1-beta
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2, 2.1-beta. Thanks Yu for the patch!

 Support Kerberos HTTP SPNEGO authentication for non-SUN JDK
 ---

 Key: HADOOP-9446
 URL: https://issues.apache.org/jira/browse/HADOOP-9446
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Yu Gao
Assignee: Yu Gao
 Fix For: 2.1.1-beta

 Attachments: HADOOP-9446-branch-2.patch, 
 HADOOP-9446-branch-2-v2.patch, HADOOP-9446.patch, HADOOP-9446-v2.patch, 
 TestKerberosHttpSPNEGO.java, 
 TEST-org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator.xml,
  
 TEST-org.apache.hadoop.security.authentication.server.TestKerberosAuthenticationHandler.xml


 Class KerberosAuthenticator and KerberosAuthenticationHandler currently only 
 support running with SUN JDK when Kerberos is enabled. In order to support  
 alternative JDKs like IBM JDK which has different options supported by 
 Krb5LoginModule and different login module classes, the HTTP Kerberos 
 authentication classes need to be changed.
 In addition, NT_GSS_KRB5_PRINCIPAL, which is used in KerberosAuthenticator to 
 get the corresponding oid instance, is a field defined in SUN JDK, but not in 
 IBM JDK.
 This JIRA is to fix the existing problems and add support for Kerberos HTTP 
 SPNEGO authentication with non-SUN JDK.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9740) FsShell's Text command does not read avro data files stored on HDFS

2013-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13738998#comment-13738998
 ] 

Hudson commented on HADOOP-9740:


SUCCESS: Integrated in Hadoop-trunk-Commit #4253 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4253/])
HADOOP-9740. Fix FsShell '-text' command to be able to read Avro files stored 
in HDFS.  Contributed by Allan Yan. (cutting: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1513684)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/shell
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/shell/TestTextCommand.java


 FsShell's Text command does not read avro data files stored on HDFS
 ---

 Key: HADOOP-9740
 URL: https://issues.apache.org/jira/browse/HADOOP-9740
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.5-alpha
Reporter: Allan Yan
Assignee: Allan Yan
  Labels: patch
 Attachments: HADOOP-9740.patch, HADOOP-9740.patch, 
 maven_unit_test_error.log


 HADOOP-8597 added support for reading avro data files from the FsShell Text 
 command. However, it does not work with files stored on HDFS. Here is the 
 error message:
 {code}
 $hadoop fs -text hdfs://localhost:8020/test.avro
 -text: URI scheme is not file
 Usage: hadoop fs [generic options] -text [-ignoreCrc] <src> ...
 {code}
 The problem is that the File constructor cannot recognize the hdfs:// scheme 
 during AvroFileInputStream initialization. 
 There is a unit test, TestTextCommand.java, under the hadoop-common project. 
 However, it only tests files in the local file system. I created a similar one 
 under the hadoop-hdfs project using MiniDFSCluster. Please see the attached 
 maven unit test error message with full stack trace for more details.
  
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9789) Support server advertised kerberos principals

2013-08-13 Thread Omkar Vinit Joshi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13739009#comment-13739009
 ] 

Omkar Vinit Joshi commented on HADOOP-9789:
---

[~daryn] sorry, I didn't get time yesterday to check the latest patch. I will 
try it on my local secured cluster and let you know. Thanks for fixing it.

 Support server advertised kerberos principals
 -

 Key: HADOOP-9789
 URL: https://issues.apache.org/jira/browse/HADOOP-9789
 Project: Hadoop Common
  Issue Type: New Feature
  Components: ipc, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Fix For: 3.0.0, 2.1.1-beta

 Attachments: HADOOP-9789.2.patch, HADOOP-9789.patch, 
 HADOOP-9789.patch, hadoop-ojoshi-datanode-HW10351.local.log, 
 hadoop-ojoshi-namenode-HW10351.local.log


 The RPC client currently constructs the kerberos principal based on a 
 config value, usually with an _HOST substitution.  This means the service 
 principal must match the hostname the client is using to connect.  This 
 causes problems:
 * Prevents using HA with IP failover when the servers have distinct 
 principals from the failover hostname
 * Prevents clients from being able to access a service bound to multiple 
 interfaces.  Only the interface that matches the server's principal may be 
 used.
 The client should be able to use the SASL advertised principal (HADOOP-9698), 
 with appropriate safeguards, to acquire the correct service ticket.
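 
 As a hedged illustration of the current behavior (the realm and hostnames 
 below are made up), SecurityUtil.getServerPrincipal performs the _HOST 
 substitution against the hostname the client connects with, which is exactly 
 what ties the expected principal to a single interface:
 {code}
 import java.io.IOException;
 import org.apache.hadoop.security.SecurityUtil;
 
 public class PrincipalDemo {
   public static void main(String[] args) throws IOException {
     // e.g. the value of dfs.namenode.kerberos.principal
     String configured = "nn/_HOST@EXAMPLE.COM";
     // The same service reached via two hostnames yields two different
     // principals, so only the one matching the server's actual principal
     // can acquire a valid service ticket.
     System.out.println(SecurityUtil.getServerPrincipal(configured, "nn1.example.com"));
     System.out.println(SecurityUtil.getServerPrincipal(configured, "nn-vip.example.com"));
     // prints nn/nn1.example.com@EXAMPLE.COM
     // and    nn/nn-vip.example.com@EXAMPLE.COM
   }
 }
 {code}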

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9446) Support Kerberos HTTP SPNEGO authentication for non-SUN JDK

2013-08-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13739014#comment-13739014
 ] 

Hudson commented on HADOOP-9446:


SUCCESS: Integrated in Hadoop-trunk-Commit #4254 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4254/])
HADOOP-9446. Support Kerberos SPNEGO for IBM JDK. (Yu Gao via llu) (llu: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1513687)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/KerberosAuthenticationHandler.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/util
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/util/PlatformName.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/PlatformName.java


 Support Kerberos HTTP SPNEGO authentication for non-SUN JDK
 ---

 Key: HADOOP-9446
 URL: https://issues.apache.org/jira/browse/HADOOP-9446
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 1.1.1, 2.0.2-alpha
Reporter: Yu Gao
Assignee: Yu Gao
 Fix For: 2.1.1-beta

 Attachments: HADOOP-9446-branch-2.patch, 
 HADOOP-9446-branch-2-v2.patch, HADOOP-9446.patch, HADOOP-9446-v2.patch, 
 TestKerberosHttpSPNEGO.java, 
 TEST-org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator.xml,
  
 TEST-org.apache.hadoop.security.authentication.server.TestKerberosAuthenticationHandler.xml


 Class KerberosAuthenticator and KerberosAuthenticationHandler currently only 
 support running with the SUN JDK when Kerberos is enabled. In order to support 
 alternative JDKs like the IBM JDK, which have different options supported by 
 Krb5LoginModule and different login module classes, the HTTP Kerberos 
 authentication classes need to be changed.
 In addition, NT_GSS_KRB5_PRINCIPAL, which is used in KerberosAuthenticator to 
 get the corresponding oid instance, is a field defined in SUN JDK, but not in 
 IBM JDK.
 This JIRA is to fix the existing problems and add support for Kerberos HTTP 
 SPNEGO authentication with non-SUN JDK.
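 
 A minimal sketch of the vendor-detection pattern this implies 
 (Krb5ModuleSelector is an illustrative name, not the patched class; the 
 module class names are the documented Sun/Oracle and IBM ones):
 {code}
 public class Krb5ModuleSelector {
   private static final boolean IBM_JAVA =
       System.getProperty("java.vendor", "").contains("IBM");
 
   // Choose the Krb5LoginModule implementation by JVM vendor instead of
   // hard-coding the Sun class name.
   public static String getKrb5LoginModuleName() {
     return IBM_JAVA
         ? "com.ibm.security.auth.module.Krb5LoginModule"
         : "com.sun.security.auth.module.Krb5LoginModule";
   }
 
   public static void main(String[] args) {
     System.out.println(getKrb5LoginModuleName());
   }
 }
 {code}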

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9652) RawLocalFs#getFileLinkStatus does not fill in the link owner and mode

2013-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13739018#comment-13739018
 ] 

Hadoop QA commented on HADOOP-9652:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12597537/hadoop-9652-6.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2977//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2977//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2977//console

This message is automatically generated.

 RawLocalFs#getFileLinkStatus does not fill in the link owner and mode
 -

 Key: HADOOP-9652
 URL: https://issues.apache.org/jira/browse/HADOOP-9652
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
 Fix For: 2.3.0

 Attachments: hadoop-9452-1.patch, hadoop-9652-2.patch, 
 hadoop-9652-3.patch, hadoop-9652-4.patch, hadoop-9652-5.patch, 
 hadoop-9652-6.patch


 {{RawLocalFs#getFileLinkStatus}} does not actually get the owner and mode of 
 the symlink, but instead uses the owner and mode of the symlink target.  If 
 the target can't be found, it fills in bogus values (the empty string and 
 FsPermission.getDefault) for these.
 Symlinks have an owner distinct from the owner of the target they point to, 
 and getFileLinkStatus ought to expose this.
 In some operating systems, symlinks can have a permission other than 0777.  
 We ought to expose this in RawLocalFileSystem and other places, although we 
 don't necessarily have to support this behavior in HDFS.
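 
 A hedged sketch (not the committed patch) of stat'ing the link itself with 
 java.nio.file and LinkOption.NOFOLLOW_LINKS, assuming a Java 7+ runtime and a 
 file system that supports the POSIX attribute view:
 {code}
 import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.LinkOption;
 import java.nio.file.Path;
 import java.nio.file.Paths;
 import java.nio.file.attribute.PosixFileAttributeView;
 import java.nio.file.attribute.PosixFileAttributes;
 import java.nio.file.attribute.PosixFilePermissions;
 
 public class LinkStat {
   public static void main(String[] args) throws IOException {
     Path link = Paths.get(args[0]);
     // NOFOLLOW_LINKS makes readAttributes() describe the link itself,
     // not its (possibly dangling) target.
     PosixFileAttributeView view = Files.getFileAttributeView(
         link, PosixFileAttributeView.class, LinkOption.NOFOLLOW_LINKS);
     if (view == null) {
       throw new UnsupportedOperationException("no POSIX view on this platform");
     }
     PosixFileAttributes attrs = view.readAttributes();
     System.out.println("owner: " + attrs.owner().getName());
     System.out.println("group: " + attrs.group().getName());
     System.out.println("mode:  " + PosixFilePermissions.toString(attrs.permissions()));
   }
 }
 {code}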

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9871) Fix intermittent findbug warnings in DefaultMetricsSystem

2013-08-13 Thread Luke Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Lu updated HADOOP-9871:


Description: Findbugs sometimes picks up warnings from DefaultMetricsSystem 
due to some of the fields not being transient in a serializable class 
(DefaultMetricsSystem is an Enum, which is serializable).   (was: Findbugs 
sometimes (not always) picks up warnings from DefaultMetricsSystem due to some 
of the fields not being transient for serializable class (DefaultMetricsSystem 
is an Enum (which is serializable)). )

 Fix intermittent findbug warnings in DefaultMetricsSystem
 -

 Key: HADOOP-9871
 URL: https://issues.apache.org/jira/browse/HADOOP-9871
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Luke Lu
Assignee: Junping Du
Priority: Minor

 Findbugs sometimes picks up warnings from DefaultMetricsSystem due to some of 
 the fields not being transient in a serializable class (DefaultMetricsSystem 
 is an Enum, which is serializable). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9652) RawLocalFs#getFileLinkStatus does not fill in the link owner and mode

2013-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13739031#comment-13739031
 ] 

Hadoop QA commented on HADOOP-9652:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12597844/hadoop-9652-6.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2978//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2978//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2978//console

This message is automatically generated.

 RawLocalFs#getFileLinkStatus does not fill in the link owner and mode
 -

 Key: HADOOP-9652
 URL: https://issues.apache.org/jira/browse/HADOOP-9652
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
 Fix For: 2.3.0

 Attachments: hadoop-9452-1.patch, hadoop-9652-2.patch, 
 hadoop-9652-3.patch, hadoop-9652-4.patch, hadoop-9652-5.patch, 
 hadoop-9652-6.patch


 {{RawLocalFs#getFileLinkStatus}} does not actually get the owner and mode of 
 the symlink, but instead uses the owner and mode of the symlink target.  If 
 the target can't be found, it fills in bogus values (the empty string and 
 FsPermission.getDefault) for these.
 Symlinks have an owner distinct from the owner of the target they point to, 
 and getFileLinkStatus ought to expose this.
 In some operating systems, symlinks can have a permission other than 0777.  
 We ought to expose this in RawLocalFileSystem and other places, although we 
 don't necessarily have to support this behavior in HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9652) RawLocalFs#getFileLinkStatus does not fill in the link owner and mode

2013-08-13 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13739038#comment-13739038
 ] 

Colin Patrick McCabe commented on HADOOP-9652:
--

+1.  Will commit shortly.

 RawLocalFs#getFileLinkStatus does not fill in the link owner and mode
 -

 Key: HADOOP-9652
 URL: https://issues.apache.org/jira/browse/HADOOP-9652
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
 Fix For: 2.3.0

 Attachments: hadoop-9452-1.patch, hadoop-9652-2.patch, 
 hadoop-9652-3.patch, hadoop-9652-4.patch, hadoop-9652-5.patch, 
 hadoop-9652-6.patch


 {{RawLocalFs#getFileLinkStatus}} does not actually get the owner and mode of 
 the symlink, but instead uses the owner and mode of the symlink target.  If 
 the target can't be found, it fills in bogus values (the empty string and 
 FsPermission.getDefault) for these.
 Symlinks have an owner distinct from the owner of the target they point to, 
 and getFileLinkStatus ought to expose this.
 In some operating systems, symlinks can have a permission other than 0777.  
 We ought to expose this in RawLocalFileSystem and other places, although we 
 don't necessarily have to support this behavior in HDFS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9346) Upgrading to protoc 2.5.0 fails the build

2013-08-13 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HADOOP-9346.
-

Resolution: Duplicate

Thanks for pinging, Ravi. I'd discussed with Alejandro that this could be 
closed. Looks like we added a dupe link but failed to close it. Closing now.

 Upgrading to protoc 2.5.0 fails the build
 -

 Key: HADOOP-9346
 URL: https://issues.apache.org/jira/browse/HADOOP-9346
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
  Labels: protobuf
 Attachments: HADOOP-9346.patch


 Reported over the impala lists, one of the errors received is:
 {code}
 src/hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ha/proto/ZKFCProtocolProtos.java:[104,37]
  cannot find symbol
 symbol: class Parser
 location: package com.google.protobuf
 {code}
 Worth looking into, as we'll eventually bump our protobuf deps; the generated 
 code references com.google.protobuf.Parser, which an older protobuf-java 
 runtime on the classpath does not provide.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9871) Fix intermittent findbug warnings in DefaultMetricsSystem

2013-08-13 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13739208#comment-13739208
 ] 

Junping Du commented on HADOOP-9871:


I didn't see the Findbugs complaint myself. However, I see that some fields, 
like UniqueNames, are non-serializable and not marked as transient, so I guess 
the warnings are of the "Non-transient non-serializable instance field in 
serializable class" kind. I will deliver a quick patch to fix it.
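
A minimal sketch of the situation (SingletonMetricsSystem and NameCache are 
illustrative names, not the actual DefaultMetricsSystem fields): an enum is 
implicitly serializable, so any non-serializable instance field must be 
transient to satisfy Findbugs' SE_BAD_FIELD check.
{code}
// Enums implement java.io.Serializable implicitly.
public enum SingletonMetricsSystem {
  INSTANCE;

  // A UniqueNames-like helper that does not implement Serializable.
  static final class NameCache {
    // non-serializable state would live here
  }

  // Without 'transient', Findbugs reports "Non-transient non-serializable
  // instance field in serializable class" (SE_BAD_FIELD); with it, the
  // warning goes away.
  private transient NameCache names = new NameCache();

  public NameCache names() { return names; }
}
{code}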

 Fix intermittent findbug warnings in DefaultMetricsSystem
 -

 Key: HADOOP-9871
 URL: https://issues.apache.org/jira/browse/HADOOP-9871
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Luke Lu
Assignee: Junping Du
Priority: Minor

 Findbugs sometimes picks up warnings from DefaultMetricsSystem due to some of 
 the fields not being transient in a serializable class (DefaultMetricsSystem 
 is an Enum, which is serializable). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9871) Fix intermittent findbug warnings in DefaultMetricsSystem

2013-08-13 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-9871:
---

Attachment: HADOOP-9871.patch

 Fix intermittent findbug warnings in DefaultMetricsSystem
 -

 Key: HADOOP-9871
 URL: https://issues.apache.org/jira/browse/HADOOP-9871
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Luke Lu
Assignee: Junping Du
Priority: Minor
 Attachments: HADOOP-9871.patch


 Findbugs sometimes picks up warnings from DefaultMetricsSystem due to some of 
 the fields not being transient in a serializable class (DefaultMetricsSystem 
 is an Enum, which is serializable). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9871) Fix intermittent findbug warnings in DefaultMetricsSystem

2013-08-13 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-9871:
---

Target Version/s: 3.0.0
  Status: Patch Available  (was: Open)

 Fix intermittent findbug warnings in DefaultMetricsSystem
 -

 Key: HADOOP-9871
 URL: https://issues.apache.org/jira/browse/HADOOP-9871
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Luke Lu
Assignee: Junping Du
Priority: Minor
 Attachments: HADOOP-9871.patch


 Findbugs sometimes picks up warnings from DefaultMetricsSystem due to some of 
 the fields not being transient in a serializable class (DefaultMetricsSystem 
 is an Enum, which is serializable). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7682) taskTracker could not start because Failed to set permissions to ttprivate to 0700

2013-08-13 Thread Magic Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13739234#comment-13739234
 ] 

Magic Xie commented on HADOOP-7682:
---

Hi all, this bug seems to be fixed.
Please see 
http://svn.apache.org/viewvc/hadoop/common/tags/release-2.1.0-beta-rc1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java?revision=1065901&view=markup
and HADOOP-7126 ("Fix file permission setting for RawLocalFileSystem on 
Windows", contributed by Po Cheung).

 taskTracker could not start because Failed to set permissions to ttprivate 
 to 0700
 --

 Key: HADOOP-7682
 URL: https://issues.apache.org/jira/browse/HADOOP-7682
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.0.1
 Environment: OS:WindowsXP SP3 , Filesystem :NTFS, cygwin 1.7.9-1, 
 jdk1.6.0_05
Reporter: Magic Xie

 ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker because 
 java.io.IOException: Failed to set permissions of 
 path:/tmp/hadoop-cyg_server/mapred/local/ttprivate to 0700
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.checkReturnValue(RawLocalFileSystem.java:525)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:499)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:318)
 at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:183)
 at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:635)
 at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1328)
 at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3430)
 Since hadoop 0.20.203, when the TaskTracker initializes, it checks the 
 permissions (TaskTracker line 624) of 
 org.apache.hadoop.mapred.TaskTracker.TT_LOG_TMP_DIR, 
 org.apache.hadoop.mapred.TaskTracker.TT_PRIVATE_DIR, and 
 org.apache.hadoop.mapred.TaskTracker.TT_PRIVATE_DIR. RawLocalFileSystem 
 (http://svn.apache.org/viewvc/hadoop/common/tags/release-0.20.203.0/src/core/org/apache/hadoop/fs/RawLocalFileSystem.java?view=markup)
 calls setPermission (line 481) to deal with it. setPermission works fine on 
 *nix; however, it does not always work on Windows.
 setPermission calls setReadable of java.io.File at line 498, but according to 
 Table 1 provided by Oracle, setReadable(false) will always return false on 
 Windows, the same as setExecutable(false):
 http://java.sun.com/developer/technicalArticles/J2SE/Desktop/javase6/enhancements/
 Is this what causes the TaskTracker to fail to set permissions of ttprivate to 
 0700?
 Hadoop 0.20.202 works fine in the same environment. 
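 
 A hedged sketch reproducing the failure mode described above (PermissionProbe 
 is an illustrative name; the 0700 emulation mirrors what a 
 checkReturnValue-style guard would reject):
 {code}
 import java.io.File;
 import java.io.IOException;
 
 public class PermissionProbe {
   public static void main(String[] args) throws IOException {
     File dir = new File(System.getProperty("java.io.tmpdir"), "ttprivate-probe");
     if (!dir.exists() && !dir.mkdirs()) {
       throw new IOException("could not create " + dir);
     }
     // Emulate chmod 0700: revoke each permission for everyone, then grant
     // it back to the owner only. Non-short-circuit '&' runs every call.
     boolean ok = dir.setReadable(false, false)
                & dir.setReadable(true, true)
                & dir.setWritable(false, false)
                & dir.setWritable(true, true)
                & dir.setExecutable(false, false)
                & dir.setExecutable(true, true);
     // On Windows, setReadable(false) and setExecutable(false) return false,
     // so a guard that checks these return values throws, which is the
     // "Failed to set permissions ... to 0700" failure above.
     System.out.println("0700 emulation " + (ok ? "succeeded" : "failed"));
   }
 }
 {code}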

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9871) Fix intermittent findbug warnings in DefaultMetricsSystem

2013-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13739235#comment-13739235
 ] 

Hadoop QA commented on HADOOP-9871:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12597887/HADOOP-9871.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2979//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2979//console

This message is automatically generated.

 Fix intermittent findbug warnings in DefaultMetricsSystem
 -

 Key: HADOOP-9871
 URL: https://issues.apache.org/jira/browse/HADOOP-9871
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Luke Lu
Assignee: Junping Du
Priority: Minor
 Attachments: HADOOP-9871.patch


 Findbugs sometimes picks up warnings from DefaultMetricsSystem due to some of 
 the fields not being transient in a serializable class (DefaultMetricsSystem 
 is an Enum, which is serializable). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira