[jira] [Commented] (HADOOP-11001) Fix test-patch to work with the git repo

2014-08-29 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14114899#comment-14114899
 ] 

Karthik Kambatla commented on HADOOP-11001:
---

Are we sure it is the {{git reset --hard}} and not {{git clean -xdf}}? 

Is it okay to create the patchprocess directory after these calls? 

 Fix test-patch to work with the git repo
 

 Key: HADOOP-11001
 URL: https://issues.apache.org/jira/browse/HADOOP-11001
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.5.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Blocker
 Fix For: 2.5.1

 Attachments: hadoop-11001-1.patch, hadoop-11001-2.patch


 We want the precommit builds to run against the git repo after the 
 transition. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-8815) RandomDatum overrides equals(Object) but no hashCode()

2014-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115139#comment-14115139
 ] 

Hudson commented on HADOOP-8815:


FAILURE: Integrated in Hadoop-Yarn-trunk #663 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/663/])
Fixing CHANGES.txt, moving HADOOP-8815 to 2.6.0 release (tucu: rev 
88c5e2141c4e85c2cac9463aaf68091a0e93302e)
* hadoop-common-project/hadoop-common/CHANGES.txt


 RandomDatum overrides equals(Object) but no hashCode()
 --

 Key: HADOOP-8815
 URL: https://issues.apache.org/jira/browse/HADOOP-8815
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-8815.patch, HADOOP-8815.patch


 Overriding equals() but not hashCode() violates the general contract for 
 Object.hashCode, which can have unexpected repercussions when this class is 
 used in conjunction with hash-based collections.
 This test class is used in multiple places, so it may be worth fixing.
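The contract violation described above can be sketched with a minimal stand-in class (illustrative only; this is not the actual RandomDatum):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Minimal stand-in (not the real RandomDatum) for a class that
// overrides equals() but not hashCode().
class Datum {
    final byte[] data;
    Datum(byte[] data) { this.data = data; }
    @Override public boolean equals(Object o) {
        return o instanceof Datum && Arrays.equals(((Datum) o).data, data);
    }
    // hashCode() deliberately not overridden: two equal Datum instances
    // get unrelated identity hash codes and land in different buckets.
}

public class HashContractDemo {
    public static void main(String[] args) {
        Set<Datum> set = new HashSet<>();
        set.add(new Datum(new byte[]{1, 2, 3}));
        // An equal object is in the set, but the lookup almost certainly
        // fails because the probe hashes to a different bucket:
        System.out.println(set.contains(new Datum(new byte[]{1, 2, 3})));
    }
}
```

Because HashSet first locates a bucket via hashCode() and only then calls equals(), two equal objects with different hash codes are effectively invisible to each other.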



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-11013) CLASSPATH handling should be consolidated, debuggable

2014-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115148#comment-14115148
 ] 

Hudson commented on HADOOP-11013:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #663 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/663/])
HADOOP-11013. CLASSPATH handling should be consolidated, debuggable (aw) (aw: 
rev d8774cc577198fdc3bc36c26526c95ea9a989800)
* hadoop-common-project/hadoop-common/src/main/bin/rcc
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-yarn-project/hadoop-yarn/bin/yarn
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* hadoop-common-project/hadoop-common/src/main/bin/hadoop
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
* hadoop-mapreduce-project/bin/mapred
* hadoop-common-project/hadoop-common/CHANGES.txt


 CLASSPATH handling should be consolidated, debuggable
 -

 Key: HADOOP-11013
 URL: https://issues.apache.org/jira/browse/HADOOP-11013
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11013-01.patch, HADOOP-11013.patch


 As part of HADOOP-9902, java execution across many different shell bits was 
 consolidated down to (effectively) two routines.  Prior to calling those two 
 routines, the CLASSPATH is exported.  This export should really be handled 
 in the exec function and not in the individual shell bits.
 Additionally, it would be good if there was:
 {code}
 echo ${CLASSPATH} > /dev/null
 {code}
 so that bash -x would show the content of the classpath, or even a '--debug 
 classpath' option that would echo the classpath to the screen prior to the 
 java exec to help with debugging.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10150) Hadoop cryptographic file system

2014-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115141#comment-14115141
 ] 

Hudson commented on HADOOP-10150:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #663 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/663/])
Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following 
merge to branch-2 (tucu: rev d9a7404c389ea1adffe9c13f7178b54678577b56)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-mapreduce-project/CHANGES.txt


 Hadoop cryptographic file system
 

 Key: HADOOP-10150
 URL: https://issues.apache.org/jira/browse/HADOOP-10150
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 3.0.0
Reporter: Yi Liu
Assignee: Yi Liu
  Labels: rhino
 Fix For: 2.6.0

 Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file 
 system-V2.docx, HADOOP cryptographic file system.pdf, 
 HDFSDataAtRestEncryptionAlternatives.pdf, 
 HDFSDataatRestEncryptionAttackVectors.pdf, 
 HDFSDataatRestEncryptionProposal.pdf, cfs.patch, extended information based 
 on INode feature.patch


 There is an increasing need for securing data when Hadoop customers use 
 various upper layer applications, such as Map-Reduce, Hive, Pig, HBase and so 
 on.
 HADOOP CFS (Hadoop Cryptographic File System) secures data by using Hadoop's 
 “FilterFileSystem” to decorate DFS or other file systems, and is transparent 
 to upper layer applications. It is configurable, scalable and fast.
 High level requirements:
 1. Transparent to and requiring no modification of upper layer applications.
 2. “Seek” and “PositionedReadable” are supported for the CFS input stream if 
 the wrapped file system supports them.
 3. Very high performance for encryption and decryption; they will not become 
 a bottleneck.
 4. Can decorate HDFS and all other file systems in Hadoop without modifying 
 the existing structure of the file system, such as the namenode and datanode 
 structure if the wrapped file system is HDFS.
 5. Admins can configure encryption policies, such as which directories will 
 be encrypted.
 6. A robust key management framework.
 7. Support pread and append operations if the wrapped file system supports 
 them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-11005) Fix HTTP content type for ReconfigurationServlet

2014-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115140#comment-14115140
 ] 

Hudson commented on HADOOP-11005:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #663 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/663/])
HADOOP-11005. Fix HTTP content type for ReconfigurationServlet. Contributed by 
Lei Xu. (andrew.wang: rev 7119bd49c870cf1e6b8c091d87025b439b9468df)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurationServlet.java


 Fix HTTP content type for ReconfigurationServlet
 

 Key: HADOOP-11005
 URL: https://issues.apache.org/jira/browse/HADOOP-11005
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.5.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-11005.000.patch, HADOOP-11005.000.patch


 The reconfiguration framework introduced in HDFS-7001 supports reloading 
 configuration via an HTTP servlet, using {{ReconfigurableServlet}}. 
 {{ReconfigurableServlet}} processes an HTTP GET request to list the 
 differences between the old and new configurations in HTML, with a form the 
 user can submit to confirm the configuration changes. However, since the 
 response lacks an HTTP content type, the browser renders the page as a text 
 file, which makes it impossible to submit the form. 
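The effect of the missing header can be sketched with the JDK's built-in HTTP server (a hypothetical stand-in, not the actual ReconfigurationServlet): without the one-line Content-Type declaration shown in the handler, the browser must guess the payload type and may render the HTML form as plain text.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ContentTypeDemo {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress("127.0.0.1", 0), 0);
        server.createContext("/reconf", exchange -> {
            byte[] body = "<html><body><form method=\"post\"></form></body></html>"
                    .getBytes(StandardCharsets.UTF_8);
            // The fix amounts to this one declaration; without it the
            // browser guesses and may render the HTML as plain text.
            exchange.getResponseHeaders().set("Content-Type", "text/html; charset=utf-8");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        int port = server.getAddress().getPort();
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://127.0.0.1:" + port + "/reconf").openConnection();
        // The client now sees an explicit type instead of having to sniff:
        System.out.println(conn.getContentType());
        server.stop(0);
    }
}
```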



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10880) Move HTTP delegation tokens out of URL querystring to a header

2014-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115146#comment-14115146
 ] 

Hudson commented on HADOOP-10880:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #663 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/663/])
HADOOP-10880. Move HTTP delegation tokens out of URL querystring to a header. 
(tucu) (tucu: rev d1ae479aa5ae4d3e7ec80e35892e1699c378f813)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestWebDelegationToken.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestDelegationTokenAuthenticationHandlerWithMocks.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticatedURL.java


 Move HTTP delegation tokens out of URL querystring to a header
 --

 Key: HADOOP-10880
 URL: https://issues.apache.org/jira/browse/HADOOP-10880
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Blocker
 Fix For: 2.6.0

 Attachments: HADOOP-10880.patch, HADOOP-10880.patch, 
 HADOOP-10880.patch, HADOOP-10880.patch, HADOOP-10880.patch


 Following up on a discussion in HADOOP-10799.
 Because URLs are often logged, delegation tokens may end up in LOG files 
 while they are still valid. 
 We should move the tokens to a header.
 We should still support tokens in the querystring for backwards compatibility.
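The proposed move can be sketched as follows; the header name used here is an assumption for illustration, not necessarily the one the patch adopts:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class TokenHeaderDemo {
    // Hypothetical header name for illustration only.
    static final String TOKEN_HEADER = "X-Hadoop-Delegation-Token";

    public static void main(String[] args) throws Exception {
        String token = "opaque-token-bytes";

        // Before: token in the querystring, so any access log that records
        // the request URL captures a still-valid credential.
        URL logged = new URL("http://nn.example.com:50070/webhdfs/v1/?delegation=" + token);

        // After: token travels in a request header; the logged URL is clean.
        URL clean = new URL("http://nn.example.com:50070/webhdfs/v1/");
        HttpURLConnection conn = (HttpURLConnection) clean.openConnection();
        conn.setRequestProperty(TOKEN_HEADER, token);
        System.out.println(conn.getRequestProperty(TOKEN_HEADER));
    }
}
```

Since access logs typically record the request line but not arbitrary request headers, this keeps valid tokens out of log files while the querystring path can stay for backwards compatibility.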



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10922) User documentation for CredentialShell

2014-08-29 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115190#comment-14115190
 ] 

Larry McCay commented on HADOOP-10922:
--

I see, [~andrew.wang]. I'll try and get them both done but may file a separate 
jira for the command manual if we have to break it up into separate passes.

 User documentation for CredentialShell
 --

 Key: HADOOP-10922
 URL: https://issues.apache.org/jira/browse/HADOOP-10922
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Andrew Wang
Assignee: Larry McCay

 The CredentialShell needs end user documentation for the website.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-8815) RandomDatum overrides equals(Object) but no hashCode()

2014-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115252#comment-14115252
 ] 

Hudson commented on HADOOP-8815:


FAILURE: Integrated in Hadoop-Hdfs-trunk #1854 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1854/])
Fixing CHANGES.txt, moving HADOOP-8815 to 2.6.0 release (tucu: rev 
88c5e2141c4e85c2cac9463aaf68091a0e93302e)
* hadoop-common-project/hadoop-common/CHANGES.txt


 RandomDatum overrides equals(Object) but no hashCode()
 --

 Key: HADOOP-8815
 URL: https://issues.apache.org/jira/browse/HADOOP-8815
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-8815.patch, HADOOP-8815.patch


 Overriding equals() but not hashCode() violates the general contract for 
 Object.hashCode, which can have unexpected repercussions when this class is 
 used in conjunction with hash-based collections.
 This test class is used in multiple places, so it may be worth fixing.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-11005) Fix HTTP content type for ReconfigurationServlet

2014-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115253#comment-14115253
 ] 

Hudson commented on HADOOP-11005:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1854 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1854/])
HADOOP-11005. Fix HTTP content type for ReconfigurationServlet. Contributed by 
Lei Xu. (andrew.wang: rev 7119bd49c870cf1e6b8c091d87025b439b9468df)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurationServlet.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Fix HTTP content type for ReconfigurationServlet
 

 Key: HADOOP-11005
 URL: https://issues.apache.org/jira/browse/HADOOP-11005
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.5.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-11005.000.patch, HADOOP-11005.000.patch


 The reconfiguration framework introduced in HDFS-7001 supports reloading 
 configuration via an HTTP servlet, using {{ReconfigurableServlet}}. 
 {{ReconfigurableServlet}} processes an HTTP GET request to list the 
 differences between the old and new configurations in HTML, with a form the 
 user can submit to confirm the configuration changes. However, since the 
 response lacks an HTTP content type, the browser renders the page as a text 
 file, which makes it impossible to submit the form. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10150) Hadoop cryptographic file system

2014-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115254#comment-14115254
 ] 

Hudson commented on HADOOP-10150:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1854 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1854/])
Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following 
merge to branch-2 (tucu: rev d9a7404c389ea1adffe9c13f7178b54678577b56)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-mapreduce-project/CHANGES.txt
* hadoop-common-project/hadoop-common/CHANGES.txt


 Hadoop cryptographic file system
 

 Key: HADOOP-10150
 URL: https://issues.apache.org/jira/browse/HADOOP-10150
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 3.0.0
Reporter: Yi Liu
Assignee: Yi Liu
  Labels: rhino
 Fix For: 2.6.0

 Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file 
 system-V2.docx, HADOOP cryptographic file system.pdf, 
 HDFSDataAtRestEncryptionAlternatives.pdf, 
 HDFSDataatRestEncryptionAttackVectors.pdf, 
 HDFSDataatRestEncryptionProposal.pdf, cfs.patch, extended information based 
 on INode feature.patch


 There is an increasing need for securing data when Hadoop customers use 
 various upper layer applications, such as Map-Reduce, Hive, Pig, HBase and so 
 on.
 HADOOP CFS (Hadoop Cryptographic File System) secures data by using Hadoop's 
 “FilterFileSystem” to decorate DFS or other file systems, and is transparent 
 to upper layer applications. It is configurable, scalable and fast.
 High level requirements:
 1. Transparent to and requiring no modification of upper layer applications.
 2. “Seek” and “PositionedReadable” are supported for the CFS input stream if 
 the wrapped file system supports them.
 3. Very high performance for encryption and decryption; they will not become 
 a bottleneck.
 4. Can decorate HDFS and all other file systems in Hadoop without modifying 
 the existing structure of the file system, such as the namenode and datanode 
 structure if the wrapped file system is HDFS.
 5. Admins can configure encryption policies, such as which directories will 
 be encrypted.
 6. A robust key management framework.
 7. Support pread and append operations if the wrapped file system supports 
 them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-11013) CLASSPATH handling should be consolidated, debuggable

2014-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115261#comment-14115261
 ] 

Hudson commented on HADOOP-11013:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1854 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1854/])
HADOOP-11013. CLASSPATH handling should be consolidated, debuggable (aw) (aw: 
rev d8774cc577198fdc3bc36c26526c95ea9a989800)
* hadoop-yarn-project/hadoop-yarn/bin/yarn
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
* hadoop-common-project/hadoop-common/src/main/bin/hadoop
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/bin/rcc
* hadoop-mapreduce-project/bin/mapred


 CLASSPATH handling should be consolidated, debuggable
 -

 Key: HADOOP-11013
 URL: https://issues.apache.org/jira/browse/HADOOP-11013
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11013-01.patch, HADOOP-11013.patch


 As part of HADOOP-9902, java execution across many different shell bits was 
 consolidated down to (effectively) two routines.  Prior to calling those two 
 routines, the CLASSPATH is exported.  This export should really be handled 
 in the exec function and not in the individual shell bits.
 Additionally, it would be good if there was:
 {code}
 echo ${CLASSPATH} > /dev/null
 {code}
 so that bash -x would show the content of the classpath, or even a '--debug 
 classpath' option that would echo the classpath to the screen prior to the 
 java exec to help with debugging.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10880) Move HTTP delegation tokens out of URL querystring to a header

2014-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115259#comment-14115259
 ] 

Hudson commented on HADOOP-10880:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1854 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1854/])
HADOOP-10880. Move HTTP delegation tokens out of URL querystring to a header. 
(tucu) (tucu: rev d1ae479aa5ae4d3e7ec80e35892e1699c378f813)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestWebDelegationToken.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticatedURL.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestDelegationTokenAuthenticationHandlerWithMocks.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java


 Move HTTP delegation tokens out of URL querystring to a header
 --

 Key: HADOOP-10880
 URL: https://issues.apache.org/jira/browse/HADOOP-10880
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Blocker
 Fix For: 2.6.0

 Attachments: HADOOP-10880.patch, HADOOP-10880.patch, 
 HADOOP-10880.patch, HADOOP-10880.patch, HADOOP-10880.patch


 Following up on a discussion in HADOOP-10799.
 Because URLs are often logged, delegation tokens may end up in LOG files 
 while they are still valid. 
 We should move the tokens to a header.
 We should still support tokens in the querystring for backwards compatibility.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-11024) DEFAULT_YARN_APPLICATION_CLASSPATH doesn't honor hadoop-layout.sh

2014-08-29 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11024:
-

 Summary: DEFAULT_YARN_APPLICATION_CLASSPATH doesn't honor 
hadoop-layout.sh
 Key: HADOOP-11024
 URL: https://issues.apache.org/jira/browse/HADOOP-11024
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer


In 0.21, hadoop-layout.sh was introduced to allow vendors to reorganize the 
Hadoop distribution in a way that pleases them.  
DEFAULT_YARN_APPLICATION_CLASSPATH hard-codes the paths that hadoop-layout.sh 
was meant to override.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-11015) Http server/client utils to propagate and recreate Exceptions from server to client

2014-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115381#comment-14115381
 ] 

Hadoop QA commented on HADOOP-11015:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org
  against trunk revision 4ae8178.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4590//console

This message is automatically generated.

 Http server/client utils to propagate and recreate Exceptions from server to 
 client
 ---

 Key: HADOOP-11015
 URL: https://issues.apache.org/jira/browse/HADOOP-11015
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11015.patch, HADOOP-11015.patch, 
 HADOOP-11015.patch


 While working on HADOOP-10771 and discussing it with [~daryn], a suggested 
 improvement was to propagate the server-side exceptions to the client in the 
 same way WebHDFS does.
 This JIRA is to provide a utility class to do the same and refactor HttpFS 
 and KMS to use it.
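The WebHDFS-style propagation being referenced can be sketched roughly as below; the JSON field names and helper methods are assumptions modeled on WebHDFS's RemoteException format, not code from the patch:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RemoteExceptionDemo {
    // Server side: encode the exception class and message, loosely
    // following WebHDFS's RemoteException JSON shape (illustrative).
    static String toJson(Exception e) {
        return "{\"RemoteException\":{\"javaClassName\":\""
                + e.getClass().getName() + "\",\"message\":\""
                + e.getMessage() + "\"}}";
    }

    // Client side: extract the class name and message, then recreate the
    // original exception reflectively via its (String) constructor.
    static Exception recreate(String json) throws Exception {
        Matcher m = Pattern.compile(
                "\"javaClassName\":\"([^\"]+)\",\"message\":\"([^\"]+)\"").matcher(json);
        if (!m.find()) throw new IllegalArgumentException("unparseable: " + json);
        Class<?> cls = Class.forName(m.group(1));
        return (Exception) cls.getConstructor(String.class).newInstance(m.group(2));
    }

    public static void main(String[] args) throws Exception {
        Exception original = new IllegalStateException("KMS key not found");
        Exception recreated = recreate(toJson(original));
        System.out.println(recreated.getClass().getName() + ": " + recreated.getMessage());
        // java.lang.IllegalStateException: KMS key not found
    }
}
```

A real implementation would use a proper JSON library and guard the reflection against unknown classes; the point is only that the client ends up throwing the same exception type the server raised.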



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-11015) Http server/client utils to propagate and recreate Exceptions from server to client

2014-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115382#comment-14115382
 ] 

Hadoop QA commented on HADOOP-11015:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org
  against trunk revision 4ae8178.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4591//console

This message is automatically generated.

 Http server/client utils to propagate and recreate Exceptions from server to 
 client
 ---

 Key: HADOOP-11015
 URL: https://issues.apache.org/jira/browse/HADOOP-11015
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11015.patch, HADOOP-11015.patch, 
 HADOOP-11015.patch


 While working on HADOOP-10771 and discussing it with [~daryn], a suggested 
 improvement was to propagate the server-side exceptions to the client in the 
 same way WebHDFS does.
 This JIRA is to provide a utility class to do the same and refactor HttpFS 
 and KMS to use it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-11015) Http server/client utils to propagate and recreate Exceptions from server to client

2014-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115392#comment-14115392
 ] 

Hadoop QA commented on HADOOP-11015:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12665019/HADOOP-11015.patch
  against trunk revision 4ae8178.

{color:red}-1 patch{color}.  Trunk compilation may be broken.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4592//console

This message is automatically generated.

 Http server/client utils to propagate and recreate Exceptions from server to 
 client
 ---

 Key: HADOOP-11015
 URL: https://issues.apache.org/jira/browse/HADOOP-11015
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11015.patch, HADOOP-11015.patch, 
 HADOOP-11015.patch


 While working on HADOOP-10771 and discussing it with [~daryn], a suggested 
 improvement was to propagate the server-side exceptions to the client in the 
 same way WebHDFS does.
 This JIRA is to provide a utility class to do the same and refactor HttpFS 
 and KMS to use it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-11025) hadoop-daemons.sh should just call hdfs directly

2014-08-29 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11025:
-

 Summary: hadoop-daemons.sh should just call hdfs directly
 Key: HADOOP-11025
 URL: https://issues.apache.org/jira/browse/HADOOP-11025
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Allen Wittenauer


There is little-to-no reason for it to call hadoop-daemon.sh anymore.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-08-29 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115409#comment-14115409
 ] 

Alejandro Abdelnur edited comment on HADOOP-10911 at 8/29/14 4:30 PM:
--

+1, pending jenkins. Greg, thanks for your patience and for doing all those 
combinations of scenarios in the tests.


was (Author: tucu00):
+1. Greg, thanks for your patience and for doing all those combinations of 
scenarios in the tests.

 hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
 ---

 Key: HADOOP-10911
 URL: https://issues.apache.org/jira/browse/HADOOP-10911
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan
 Attachments: HADOOP-10911-tests.patch, HADOOP-10911.patch, 
 HADOOP-10911v2.patch, HADOOP-10911v3.patch


 I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
 unable to authenticate with servers running the authentication filter), even 
 with HADOOP-10710 applied.
 From my reading of the spec, the problem is as follows:
 Expires is not a valid directive according to the RFC, though it is mentioned 
 for backwards compatibility with the Netscape draft spec.  When httpclient 
 sees Expires, it parses according to the Netscape draft spec, but note from 
 RFC 2109:
 {code}
 Note that the Expires date format contains embedded spaces, and that old 
 cookies did not have quotes around values. 
 {code}
 and note that AuthenticationFilter puts quotes around the value:
 https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
 So httpclient's parsing appears to be kosher.
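The clash can be illustrated with a small sketch (a hypothetical helper, not the actual AuthenticationFilter logic): RFC 2109 permits a quoted value, but once an Expires attribute pushes a client into Netscape-draft parsing, quoted values were never part of that format:

```java
public class CookieQuotingDemo {
    // Hypothetical helper: build a Set-Cookie header, quoting the value
    // only when no Expires attribute forces Netscape-style parsing.
    static String setCookie(String name, String value, String expires) {
        StringBuilder sb = new StringBuilder(name).append('=');
        if (expires == null) {
            sb.append('"').append(value).append('"'); // RFC 2109 allows quotes
        } else {
            sb.append(value);                         // Netscape draft: no quotes
            sb.append("; Expires=").append(expires);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // RFC 2109 style: quoted value, no Expires.
        System.out.println(setCookie("hadoop.auth", "u=greg&t=kerberos", null));
        // Netscape style: Expires present, so the value must go unquoted.
        System.out.println(setCookie("hadoop.auth", "u=greg&t=kerberos",
                "Thu, 01-Jan-1970 00:00:00 GMT"));
    }
}
```

The bug report amounts to the filter emitting the first style's quotes together with the second style's Expires attribute, which httpclient's Netscape-mode parser then rejects.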



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-08-29 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10911:


Status: Patch Available  (was: Open)

 hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
 ---

 Key: HADOOP-10911
 URL: https://issues.apache.org/jira/browse/HADOOP-10911
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan
 Attachments: HADOOP-10911-tests.patch, HADOOP-10911.patch, 
 HADOOP-10911v2.patch, HADOOP-10911v3.patch


 I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
 unable to authenticate with servers running the authentication filter), even 
 with HADOOP-10710 applied.
 From my reading of the spec, the problem is as follows:
 Expires is not a valid directive according to the RFC, though it is mentioned 
 for backwards compatibility with the Netscape draft spec.  When httpclient 
 sees Expires, it parses according to the Netscape draft spec, but note from 
 RFC 2109:
 {code}
 Note that the Expires date format contains embedded spaces, and that old 
 cookies did not have quotes around values. 
 {code}
 and note that AuthenticationFilter puts quotes around the value:
 https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
 So httpclient's parsing appears to be kosher.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-08-29 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14115409#comment-14115409
 ] 

Alejandro Abdelnur commented on HADOOP-10911:
-

+1. Greg, thanks for your patience and for doing all those combinations of 
scenarios in the tests.

 hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
 ---

 Key: HADOOP-10911
 URL: https://issues.apache.org/jira/browse/HADOOP-10911
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan
 Attachments: HADOOP-10911-tests.patch, HADOOP-10911.patch, 
 HADOOP-10911v2.patch, HADOOP-10911v3.patch


 I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
 unable to authenticate with servers running the authentication filter), even 
 with HADOOP-10710 applied.
 From my reading of the spec, the problem is as follows:
 Expires is not a valid directive according to the RFC, though it is mentioned 
 for backwards compatibility with the Netscape draft spec.  When httpclient 
 sees Expires, it parses according to the Netscape draft spec, but note from 
 RFC 2109:
 {code}
 Note that the Expires date format contains embedded spaces, and that old 
 cookies did not have quotes around values. 
 {code}
 and note that AuthenticationFilter puts quotes around the value:
 https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
 So httpclient's parsing appears to be kosher.





[jira] [Commented] (HADOOP-11020) TestRefreshUserMappings fails

2014-08-29 Thread Stephen Chu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115417#comment-14115417
 ] 

Stephen Chu commented on HADOOP-11020:
--

Same as HDFS-6972, which Yongjun has submitted a patch for?

 TestRefreshUserMappings fails
 -

 Key: HADOOP-11020
 URL: https://issues.apache.org/jira/browse/HADOOP-11020
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chen He

 Error Message
 /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build%402/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/testGroupMappingRefresh_rsrc.xml
  (No such file or directory)
 Stacktrace
 java.io.FileNotFoundException: 
 /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build%402/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/testGroupMappingRefresh_rsrc.xml
  (No such file or directory)
   at java.io.FileOutputStream.open(Native Method)
   at java.io.FileOutputStream.init(FileOutputStream.java:194)
   at java.io.FileOutputStream.init(FileOutputStream.java:84)
   at 
 org.apache.hadoop.security.TestRefreshUserMappings.addNewConfigResource(TestRefreshUserMappings.java:242)
   at 
 org.apache.hadoop.security.TestRefreshUserMappings.testRefreshSuperUserGroupsConfiguration(TestRefreshUserMappings.java:203)





[jira] [Commented] (HADOOP-8815) RandomDatum overrides equals(Object) but no hashCode()

2014-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115424#comment-14115424
 ] 

Hudson commented on HADOOP-8815:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1880 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1880/])
Fixing CHANGES.txt, moving HADOOP-8815 to 2.6.0 release (tucu: rev 
88c5e2141c4e85c2cac9463aaf68091a0e93302e)
* hadoop-common-project/hadoop-common/CHANGES.txt


 RandomDatum overrides equals(Object) but no hashCode()
 --

 Key: HADOOP-8815
 URL: https://issues.apache.org/jira/browse/HADOOP-8815
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-8815.patch, HADOOP-8815.patch


 Overriding equals() but not hashCode() violates the general contract for 
 Object.hashCode, which can have unexpected repercussions when this class is 
 used in conjunction with hash-based collections.
 This test class is used in multiple places, so it may be worth fixing.
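To make the contract violation concrete, here is a minimal, self-contained sketch; the class names are hypothetical stand-ins, not the actual RandomDatum code:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical stand-ins for RandomDatum. BrokenDatum overrides equals()
// only; FixedDatum overrides both equals() and hashCode().
class BrokenDatum {
    final int value;
    BrokenDatum(int value) { this.value = value; }
    @Override public boolean equals(Object o) {
        return o instanceof BrokenDatum && ((BrokenDatum) o).value == value;
    }
    // hashCode() is inherited from Object, so two equal instances
    // usually hash to different buckets.
}

class FixedDatum {
    final int value;
    FixedDatum(int value) { this.value = value; }
    @Override public boolean equals(Object o) {
        return o instanceof FixedDatum && ((FixedDatum) o).value == value;
    }
    @Override public int hashCode() { return value; }
}

public class EqualsHashCodeDemo {
    // With hashCode() overridden, hash-based lookups behave as expected.
    public static boolean lookupSucceeds() {
        Set<FixedDatum> set = new HashSet<>();
        set.add(new FixedDatum(42));
        return set.contains(new FixedDatum(42));
    }
    public static void main(String[] args) {
        Set<BrokenDatum> broken = new HashSet<>();
        broken.add(new BrokenDatum(42));
        // Typically prints false: equals() matches but the hash differs,
        // so HashSet looks in the wrong bucket.
        System.out.println(broken.contains(new BrokenDatum(42)));
        System.out.println(lookupSucceeds());
    }
}
```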





[jira] [Commented] (HADOOP-11013) CLASSPATH handling should be consolidated, debuggable

2014-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115433#comment-14115433
 ] 

Hudson commented on HADOOP-11013:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1880 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1880/])
HADOOP-11013. CLASSPATH handling should be consolidated, debuggable (aw) (aw: 
rev d8774cc577198fdc3bc36c26526c95ea9a989800)
* hadoop-common-project/hadoop-common/src/main/bin/hadoop
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-common-project/hadoop-common/src/main/bin/rcc
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
* hadoop-mapreduce-project/bin/mapred
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-yarn-project/hadoop-yarn/bin/yarn


 CLASSPATH handling should be consolidated, debuggable
 -

 Key: HADOOP-11013
 URL: https://issues.apache.org/jira/browse/HADOOP-11013
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11013-01.patch, HADOOP-11013.patch


 As part of HADOOP-9902, java execution across many different shell bits was 
 consolidated down to (effectively) two routines.  Prior to calling those two 
 routines, the CLASSPATH is exported.  This export should really be handled 
 in the exec function and not in the individual shell bits.
 Additionally, it would be good if there was:
 {code}
 echo ${CLASSPATH} > /dev/null
 {code}
 so that bash -x would show the content of the classpath, or even a '--debug 
 classpath' option that would echo the classpath to the screen prior to the 
 java exec to help with debugging.





[jira] [Commented] (HADOOP-10880) Move HTTP delegation tokens out of URL querystring to a header

2014-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115431#comment-14115431
 ] 

Hudson commented on HADOOP-10880:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1880 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1880/])
HADOOP-10880. Move HTTP delegation tokens out of URL querystring to a header. 
(tucu) (tucu: rev d1ae479aa5ae4d3e7ec80e35892e1699c378f813)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticatedURL.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestWebDelegationToken.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestDelegationTokenAuthenticationHandlerWithMocks.java


 Move HTTP delegation tokens out of URL querystring to a header
 --

 Key: HADOOP-10880
 URL: https://issues.apache.org/jira/browse/HADOOP-10880
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Blocker
 Fix For: 2.6.0

 Attachments: HADOOP-10880.patch, HADOOP-10880.patch, 
 HADOOP-10880.patch, HADOOP-10880.patch, HADOOP-10880.patch


 Following up on a discussion in HADOOP-10799.
 Because URLs are often logged, delegation tokens may end up in LOG files 
 while they are still valid. 
 We should move the tokens to a header.
 We should still support tokens in the querystring for backwards compatibility.
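A hedged sketch of the two transports described above; the "X-Hadoop-Delegation-Token" header name is illustrative, not necessarily the one the patch uses, while "delegation" is the WebHDFS-style query parameter:

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of moving a delegation token from the URL to a header.
public class TokenHeaderSketch {
    // Old style: the token rides in the URL, so it ends up in access logs.
    public static String querystringUrl(String base, String token) {
        return base + "?delegation=" + token;
    }
    // New style: the token travels in a request header, which servers
    // do not normally log. The header name here is an assumption.
    public static void attachTokenHeader(HttpURLConnection conn, String token) {
        conn.setRequestProperty("X-Hadoop-Delegation-Token", token);
    }
    public static void main(String[] args) throws Exception {
        System.out.println(querystringUrl("http://nn:50070/webhdfs/v1/tmp", "TOKEN"));
        // openConnection() performs no network I/O until connect().
        HttpURLConnection conn =
            (HttpURLConnection) new URL("http://example.org/").openConnection();
        attachTokenHeader(conn, "TOKEN");
        System.out.println(conn.getRequestProperty("X-Hadoop-Delegation-Token"));
    }
}
```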





[jira] [Commented] (HADOOP-10150) Hadoop cryptographic file system

2014-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115426#comment-14115426
 ] 

Hudson commented on HADOOP-10150:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1880 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1880/])
Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following 
merge to branch-2 (tucu: rev d9a7404c389ea1adffe9c13f7178b54678577b56)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-mapreduce-project/CHANGES.txt
* hadoop-common-project/hadoop-common/CHANGES.txt


 Hadoop cryptographic file system
 

 Key: HADOOP-10150
 URL: https://issues.apache.org/jira/browse/HADOOP-10150
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 3.0.0
Reporter: Yi Liu
Assignee: Yi Liu
  Labels: rhino
 Fix For: 2.6.0

 Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file 
 system-V2.docx, HADOOP cryptographic file system.pdf, 
 HDFSDataAtRestEncryptionAlternatives.pdf, 
 HDFSDataatRestEncryptionAttackVectors.pdf, 
 HDFSDataatRestEncryptionProposal.pdf, cfs.patch, extended information based 
 on INode feature.patch


 There is an increasing need for securing data when Hadoop customers use 
 various upper layer applications, such as Map-Reduce, Hive, Pig, HBase and so 
 on.
 HADOOP CFS (HADOOP Cryptographic File System) is used to secure data, based 
 on the HADOOP “FilterFileSystem” decorating DFS or other file systems, and is 
 transparent to upper layer applications. It’s configurable, scalable and fast.
 High-level requirements:
 1. Transparent to, and requiring no modification of, upper layer 
 applications.
 2. “Seek” and “PositionedReadable” are supported for the CFS input stream if 
 the wrapped file system supports them.
 3. Very high performance for encryption and decryption, so they will not 
 become a bottleneck.
 4. Can decorate HDFS and all other file systems in Hadoop without modifying 
 the existing file system structure, such as the namenode and datanode 
 structure if the wrapped file system is HDFS.
 5. Admins can configure encryption policies, such as which directories will 
 be encrypted.
 6. A robust key management framework.
 7. Support for Pread and append operations if the wrapped file system 
 supports them.
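The decorator idea can be sketched with plain javax.crypto streams: wrap whatever stream the underlying file system hands back in a cipher stream, leaving the file system itself untouched. AES/CTR is chosen here because it permits random access, matching the seek requirement; the all-zero key and IV are placeholders, and the real FilterFileSystem and key-management wiring are not shown:

```java
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;

// Toy illustration of the decorating-streams idea. The zero key/IV are
// placeholders; a real CFS obtains keys from the key-management layer.
public class CryptoStreamSketch {
    static Cipher cipher(int mode) {
        try {
            Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
            c.init(mode, new SecretKeySpec(new byte[16], "AES"),
                   new IvParameterSpec(new byte[16]));
            return c;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
    // Stand-in for wrapping the output stream the wrapped FS returns.
    public static byte[] encrypt(byte[] plain) {
        try {
            ByteArrayOutputStream sink = new ByteArrayOutputStream();
            try (OutputStream out =
                     new CipherOutputStream(sink, cipher(Cipher.ENCRYPT_MODE))) {
                out.write(plain);
            }
            return sink.toByteArray();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
    // Stand-in for wrapping the input stream the wrapped FS returns.
    public static byte[] decrypt(byte[] enc) {
        try {
            InputStream in = new CipherInputStream(
                    new ByteArrayInputStream(enc), cipher(Cipher.DECRYPT_MODE));
            return in.readAllBytes();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
    public static void main(String[] args) {
        byte[] roundTrip = decrypt(encrypt("hello".getBytes()));
        System.out.println(new String(roundTrip));
    }
}
```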





[jira] [Commented] (HADOOP-11005) Fix HTTP content type for ReconfigurationServlet

2014-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115425#comment-14115425
 ] 

Hudson commented on HADOOP-11005:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1880 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1880/])
HADOOP-11005. Fix HTTP content type for ReconfigurationServlet. Contributed by 
Lei Xu. (andrew.wang: rev 7119bd49c870cf1e6b8c091d87025b439b9468df)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/ReconfigurationServlet.java


 Fix HTTP content type for ReconfigurationServlet
 

 Key: HADOOP-11005
 URL: https://issues.apache.org/jira/browse/HADOOP-11005
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.5.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-11005.000.patch, HADOOP-11005.000.patch


 The reconfiguration framework introduced in HDFS-7001 supports reloading 
 configuration via an HTTP servlet, using {{ReconfigurableServlet}}. 
 {{ReconfigurableServlet}} processes an HTTP GET request to list the 
 differences between the old and new configurations in HTML, with a form that 
 allows the user to submit to confirm the configuration changes. However, since 
 the response lacks an HTTP content-type, the browser renders the page as a 
 text file, which makes it impossible to submit the form. 





[jira] [Commented] (HADOOP-11022) User replaced functions get lost 2-3 levels deep (e.g., sbin)

2014-08-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115445#comment-14115445
 ] 

Allen Wittenauer commented on HADOOP-11022:
---

Cancelling this patch since the example rotate function is incorrect.

 User replaced functions get lost 2-3 levels deep (e.g., sbin)
 -

 Key: HADOOP-11022
 URL: https://issues.apache.org/jira/browse/HADOOP-11022
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Critical
 Attachments: HADOOP-11022.patch


 The code that protects hadoop-env.sh from being re-executed is also causing 
 functions that the user replaced to get overridden with the defaults.  This 
 typically happens when running commands that nest, such as most of the 
 content in sbin.  Just running stuff out of bin (e.g., bin/hdfs --daemon 
 start namenode) does not trigger this behavior.





[jira] [Updated] (HADOOP-11022) User replaced functions get lost 2-3 levels deep (e.g., sbin)

2014-08-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11022:
--

Status: Patch Available  (was: Open)

Argh, no the patch is fine. NM. not awake yet.

 User replaced functions get lost 2-3 levels deep (e.g., sbin)
 -

 Key: HADOOP-11022
 URL: https://issues.apache.org/jira/browse/HADOOP-11022
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Critical
 Attachments: HADOOP-11022.patch


 The code that protects hadoop-env.sh from being re-executed is also causing 
 functions that the user replaced to get overridden with the defaults.  This 
 typically happens when running commands that nest, such as most of the 
 content in sbin.  Just running stuff out of bin (e.g., bin/hdfs --daemon 
 start namenode) does not trigger this behavior.





[jira] [Commented] (HADOOP-11015) Http server/client utils to propagate and recreate Exceptions from server to client

2014-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115475#comment-14115475
 ] 

Hadoop QA commented on HADOOP-11015:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12665019/HADOOP-11015.patch
  against trunk revision 4ae8178.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms 
hadoop-hdfs-project/hadoop-hdfs-httpfs:

  
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks

  The test build failed in 
hadoop-hdfs-project/hadoop-hdfs-httpfs hadoop-common-project/hadoop-kms 

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4593//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4593//console

This message is automatically generated.

 Http server/client utils to propagate and recreate Exceptions from server to 
 client
 ---

 Key: HADOOP-11015
 URL: https://issues.apache.org/jira/browse/HADOOP-11015
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11015.patch, HADOOP-11015.patch, 
 HADOOP-11015.patch


 While working on HADOOP-10771 and discussing it with [~daryn], a suggested 
 improvement was to propagate the server-side exceptions to the client in the 
 same way WebHDFS does it.
 This JIRA is to provide a utility class to do the same and refactor HttpFS 
 and KMS to use it.
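A minimal sketch of the WebHDFS-style pattern the JIRA describes, assuming a wire format of exception class name plus message; the field names and the `toWire`/`fromWire` helpers are hypothetical, not the actual utility's API:

```java
import java.lang.reflect.Constructor;
import java.util.HashMap;
import java.util.Map;

// Sketch: the server ships the exception class name and message as
// JSON-ish fields, and the client reconstructs an instance reflectively.
public class ExceptionPropagationSketch {
    public static Map<String, String> toWire(Exception ex) {
        Map<String, String> m = new HashMap<>();
        m.put("exception", ex.getClass().getName());
        m.put("message", ex.getMessage());
        return m;
    }
    public static Exception fromWire(Map<String, String> m) {
        try {
            Class<?> cls = Class.forName(m.get("exception"));
            // Assumes the exception type has a (String message) constructor,
            // as most JDK exception types do.
            Constructor<?> ctor = cls.getConstructor(String.class);
            return (Exception) ctor.newInstance(m.get("message"));
        } catch (ReflectiveOperationException e) {
            // Unknown class on the client side: fall back to a generic type.
            return new RuntimeException(m.get("exception") + ": " + m.get("message"));
        }
    }
    public static void main(String[] args) {
        Exception out = fromWire(toWire(new IllegalArgumentException("bad key name")));
        System.out.println(out.getClass().getSimpleName() + ": " + out.getMessage());
    }
}
```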





[jira] [Commented] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115474#comment-14115474
 ] 

Hadoop QA commented on HADOOP-10911:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12664529/HADOOP-10911v3.patch
  against trunk revision 4ae8178.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4594//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4594//console

This message is automatically generated.

 hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
 ---

 Key: HADOOP-10911
 URL: https://issues.apache.org/jira/browse/HADOOP-10911
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan
 Attachments: HADOOP-10911-tests.patch, HADOOP-10911.patch, 
 HADOOP-10911v2.patch, HADOOP-10911v3.patch


 I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
 unable to authenticate with servers running the authentication filter), even 
 with HADOOP-10710 applied.
 From my reading of the spec, the problem is as follows:
 Expires is not a valid directive according to the RFC, though it is mentioned 
 for backwards compatibility with the Netscape draft spec.  When httpclient sees 
 Expires, it parses according to the Netscape draft spec, but note from 
 RFC2109:
 {code}
 Note that the Expires date format contains embedded spaces, and that old 
 cookies did not have quotes around values. 
 {code}
 and note that AuthenticationFilter puts quotes around the value:
 https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
 So httpclient's parsing appears to be kosher.





[jira] [Commented] (HADOOP-10946) Fix a bunch of typos in log messages

2014-08-29 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115496#comment-14115496
 ] 

Ray Chiang commented on HADOOP-10946:
-

Forgot to link HDFS-6942.

 Fix a bunch of typos in log messages
 

 Key: HADOOP-10946
 URL: https://issues.apache.org/jira/browse/HADOOP-10946
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.4.1
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Trivial
  Labels: newbie
 Attachments: HADOOP-10946-04.patch, HADOOP-10946-05.patch, 
 HADOOP-10946-06.patch, HADOOP10946-01.patch, HADOOP10946-02.patch, 
 HADOOP10946-03.patch


 There are a bunch of typos in various log messages.  These need cleaning up.





[jira] [Commented] (HADOOP-11015) Http server/client utils to propagate and recreate Exceptions from server to client

2014-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115589#comment-14115589
 ] 

Hadoop QA commented on HADOOP-11015:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12665019/HADOOP-11015.patch
  against trunk revision 4bd0194.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms 
hadoop-hdfs-project/hadoop-hdfs-httpfs:

  
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
  org.apache.hadoop.ha.TestZKFailoverControllerStress

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4595//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4595//console

This message is automatically generated.

 Http server/client utils to propagate and recreate Exceptions from server to 
 client
 ---

 Key: HADOOP-11015
 URL: https://issues.apache.org/jira/browse/HADOOP-11015
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11015.patch, HADOOP-11015.patch, 
 HADOOP-11015.patch


 While working on HADOOP-10771 and discussing it with [~daryn], a suggested 
 improvement was to propagate the server-side exceptions to the client in the 
 same way WebHDFS does it.
 This JIRA is to provide a utility class to do the same and refactor HttpFS 
 and KMS to use it.





[jira] [Updated] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-08-29 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10911:


   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Greg. Committed to trunk and branch-2.

 hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
 ---

 Key: HADOOP-10911
 URL: https://issues.apache.org/jira/browse/HADOOP-10911
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan
 Fix For: 2.6.0

 Attachments: HADOOP-10911-tests.patch, HADOOP-10911.patch, 
 HADOOP-10911v2.patch, HADOOP-10911v3.patch


 I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
 unable to authenticate with servers running the authentication filter), even 
 with HADOOP-10710 applied.
 From my reading of the spec, the problem is as follows:
 Expires is not a valid directive according to the RFC, though it is mentioned 
 for backwards compatibility with the Netscape draft spec.  When httpclient sees 
 Expires, it parses according to the Netscape draft spec, but note from 
 RFC2109:
 {code}
 Note that the Expires date format contains embedded spaces, and that old 
 cookies did not have quotes around values. 
 {code}
 and note that AuthenticationFilter puts quotes around the value:
 https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
 So httpclient's parsing appears to be kosher.





[jira] [Updated] (HADOOP-10994) KeyProviderCryptoExtension should use CryptoCodec for generation/decryption of keys

2014-08-29 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10994:


Attachment: HADOOP-10994.patch

Fixing testcases that use mocks and were not returning a conf.

 KeyProviderCryptoExtension should use CryptoCodec for generation/decryption 
 of keys
 ---

 Key: HADOOP-10994
 URL: https://issues.apache.org/jira/browse/HADOOP-10994
 Project: Hadoop Common
  Issue Type: Task
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10994.patch, HADOOP-10994.patch, 
 HADOOP-10994.patch


 It currently uses the JDK Cipher; with the fs-encryption branch merged into 
 trunk we can swap to CryptoCodec.





[jira] [Created] (HADOOP-11026) add FileSystem contract specification for FSDataInputStream and FSDataOutputStream#isEncrypted

2014-08-29 Thread Charles Lamb (JIRA)
Charles Lamb created HADOOP-11026:
-

 Summary: add FileSystem contract specification for 
FSDataInputStream and FSDataOutputStream#isEncrypted
 Key: HADOOP-11026
 URL: https://issues.apache.org/jira/browse/HADOOP-11026
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation, test
Affects Versions: 3.0.0, 2.6.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Minor


Following on to HDFS-6843, the contract specification for FSDataInputStream and 
FSDataOutputStream needs to be updated to reflect the addition of isEncrypted.





[jira] [Commented] (HADOOP-10814) Update Tomcat version used by HttpFS and KMS to latest 6.x version

2014-08-29 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115650#comment-14115650
 ] 

Alejandro Abdelnur commented on HADOOP-10814:
-

+1

 Update Tomcat version used by HttpFS and KMS to latest 6.x version
 --

 Key: HADOOP-10814
 URL: https://issues.apache.org/jira/browse/HADOOP-10814
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Robert Kanter
 Attachments: HADOOP-10814.patch


 KMS and HttpFS are using Tomcat 6.0.37, we should move it to 6.0.41 to get 
 bug fixes and security fixes.
 We should add a property with the tomcat version in the hadoop-project POM 
 and use that property from KMS and HttpFS.





[jira] [Updated] (HADOOP-10814) Update Tomcat version used by HttpFS and KMS to latest 6.x version

2014-08-29 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10814:


   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Robert. Committed to trunk and branch-2.

 Update Tomcat version used by HttpFS and KMS to latest 6.x version
 --

 Key: HADOOP-10814
 URL: https://issues.apache.org/jira/browse/HADOOP-10814
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Robert Kanter
 Fix For: 2.6.0

 Attachments: HADOOP-10814.patch


 KMS and HttpFS are using Tomcat 6.0.37, we should move it to 6.0.41 to get 
 bug fixes and security fixes.
 We should add a property with the tomcat version in the hadoop-project POM 
 and use that property from KMS and HttpFS.





[jira] [Updated] (HADOOP-11026) add FileSystem contract specification for FSDataInputStream and FSDataOutputStream#isEncrypted

2014-08-29 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HADOOP-11026:
--

Attachment: HADOOP-11026-prelim.001.patch

Here are some preliminary diffs for the FSDataInputStream#isEncrypted doc and 
the AbstractConcreteOpenTest update. There is no equivalent to 
fsdatainputstream.md for the output side. The closest is filesystem.md. Let me 
know where you think the FSDataOutputStream doc should go.


 add FileSystem contract specification for FSDataInputStream and 
 FSDataOutputStream#isEncrypted
 --

 Key: HADOOP-11026
 URL: https://issues.apache.org/jira/browse/HADOOP-11026
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation, test
Affects Versions: 3.0.0, 2.6.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Minor
 Attachments: HADOOP-11026-prelim.001.patch


 Following on to HDFS-6843, the contract specification for FSDataInputStream 
 and FSDataOutputStream needs to be updated to reflect the addition of 
 isEncrypted.





[jira] [Work started] (HADOOP-11026) add FileSystem contract specification for FSDataInputStream and FSDataOutputStream#isEncrypted

2014-08-29 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-11026 started by Charles Lamb.

 add FileSystem contract specification for FSDataInputStream and 
 FSDataOutputStream#isEncrypted
 --

 Key: HADOOP-11026
 URL: https://issues.apache.org/jira/browse/HADOOP-11026
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation, test
Affects Versions: 3.0.0, 2.6.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Minor
 Attachments: HADOOP-11026-prelim.001.patch


 Following on to HDFS-6843, the contract specification for FSDataInputStream 
 and FSDataOutputStream needs to be updated to reflect the addition of 
 isEncrypted.





[jira] [Created] (HADOOP-11027) hadoop_verify_secure_prereq doesn't work for non-default setups

2014-08-29 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11027:
-

 Summary: hadoop_verify_secure_prereq doesn't work for non-default 
setups
 Key: HADOOP-11027
 URL: https://issues.apache.org/jira/browse/HADOOP-11027
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer
Priority: Critical


If you enable HADOOP_SECURE_COMMAND to override jsvc, 
hadoop_verify_secure_prereq fails.





[jira] [Commented] (HADOOP-10994) KeyProviderCryptoExtension should use CryptoCodec for generation/decryption of keys

2014-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115711#comment-14115711
 ] 

Hadoop QA commented on HADOOP-10994:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12665386/HADOOP-10994.patch
  against trunk revision c686aa3.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4596//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4596//console

This message is automatically generated.

 KeyProviderCryptoExtension should use CryptoCodec for generation/decryption 
 of keys
 ---

 Key: HADOOP-10994
 URL: https://issues.apache.org/jira/browse/HADOOP-10994
 Project: Hadoop Common
  Issue Type: Task
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10994.patch, HADOOP-10994.patch, 
 HADOOP-10994.patch


 Currently it uses the JDK Cipher; with the fs-encryption branch merged into 
 trunk, we can swap to CryptoCodec.





[jira] [Updated] (HADOOP-11015) Http server/client utils to propagate and recreate Exceptions from server to client

2014-08-29 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11015:


Attachment: HADOOP-11015.patch

Fixing the testcase mock to look for the exception in the right place after 
HADOOP-10880.

 Http server/client utils to propagate and recreate Exceptions from server to 
 client
 ---

 Key: HADOOP-11015
 URL: https://issues.apache.org/jira/browse/HADOOP-11015
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11015.patch, HADOOP-11015.patch, 
 HADOOP-11015.patch, HADOOP-11015.patch


 While working on HADOOP-10771 and discussing it with [~daryn], a suggested 
 improvement was to propagate the server-side exceptions to the client in the 
 same way WebHDFS does it.
 This JIRA is to provide a utility class to do the same and refactor HttpFS 
 and KMS to use it.
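The WebHDFS-style pattern referenced above serializes the exception's class name and message on the server and rebuilds the exception reflectively on the client. The following is a minimal sketch of that round trip; the helper names (`toWire`, `fromWire`) are hypothetical and not part of the actual patch:

```java
import java.lang.reflect.Constructor;

public class RemoteExceptionSketch {
  // Hypothetical server side: flatten an exception to (className, message),
  // which would typically be sent to the client as a small JSON payload.
  static String[] toWire(Exception ex) {
    return new String[] { ex.getClass().getName(), ex.getMessage() };
  }

  // Hypothetical client side: recreate the original exception reflectively,
  // falling back to RuntimeException when the class is unknown or has no
  // (String) constructor.
  static Exception fromWire(String className, String message) {
    try {
      Class<?> klass = Class.forName(className);
      Constructor<?> ctor = klass.getConstructor(String.class);
      return (Exception) ctor.newInstance(message);
    } catch (Exception e) {
      return new RuntimeException(className + ": " + message);
    }
  }

  public static void main(String[] args) {
    String[] wire = toWire(new IllegalArgumentException("bad key name"));
    Exception rebuilt = fromWire(wire[0], wire[1]);
    // prints IllegalArgumentException: bad key name
    System.out.println(rebuilt.getClass().getSimpleName() + ": "
        + rebuilt.getMessage());
  }
}
```

The client thus surfaces the same exception type the server threw, instead of a generic HTTP error.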





[jira] [Commented] (HADOOP-10994) KeyProviderCryptoExtension should use CryptoCodec for generation/decryption of keys

2014-08-29 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115752#comment-14115752
 ] 

Alejandro Abdelnur commented on HADOOP-10994:
-

test failure unrelated

 KeyProviderCryptoExtension should use CryptoCodec for generation/decryption 
 of keys
 ---

 Key: HADOOP-10994
 URL: https://issues.apache.org/jira/browse/HADOOP-10994
 Project: Hadoop Common
  Issue Type: Task
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10994.patch, HADOOP-10994.patch, 
 HADOOP-10994.patch


 Currently it uses the JDK Cipher; with the fs-encryption branch merged into 
 trunk, we can swap to CryptoCodec.





[jira] [Updated] (HADOOP-11027) HADOOP_SECURE_COMMAND catch-all

2014-08-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11027:
--

Description: Enabling HADOOP_SECURE_COMMAND to override jsvc doesn't work.  
Here's a list of issues!  (was: If you enable HADOOP_SECURE_COMMAND to override 
jsvc, hadoop_verify_secure_prereq  fails.)

 HADOOP_SECURE_COMMAND catch-all
 ---

 Key: HADOOP-11027
 URL: https://issues.apache.org/jira/browse/HADOOP-11027
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer
Priority: Critical

 Enabling HADOOP_SECURE_COMMAND to override jsvc doesn't work.  Here's a list 
 of issues!





[jira] [Updated] (HADOOP-11027) HADOOP_SECURE_COMMAND catch-all

2014-08-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11027:
--

Summary: HADOOP_SECURE_COMMAND catch-all  (was: hadoop_verify_secure_prereq 
doesn't work for non-default setups)

 HADOOP_SECURE_COMMAND catch-all
 ---

 Key: HADOOP-11027
 URL: https://issues.apache.org/jira/browse/HADOOP-11027
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer
Priority: Critical

 If you enable HADOOP_SECURE_COMMAND to override jsvc, 
 hadoop_verify_secure_prereq  fails.





[jira] [Commented] (HADOOP-11027) HADOOP_SECURE_COMMAND catch-all

2014-08-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115763#comment-14115763
 ] 

Allen Wittenauer commented on HADOOP-11027:
---

* Undocumented. ;)
* hadoop_secure_verify_prereq is broken
* su in hadoop_start_secure_daemon_wrapper assumes privilege
* priv and non-priv out and pid files are swapped?

 HADOOP_SECURE_COMMAND catch-all
 ---

 Key: HADOOP-11027
 URL: https://issues.apache.org/jira/browse/HADOOP-11027
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer
Priority: Critical

 Enabling HADOOP_SECURE_COMMAND to override jsvc doesn't work.  Here's a list 
 of issues!





[jira] [Commented] (HADOOP-10956) Fix create-release script to include docs in the binary

2014-08-29 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115776#comment-14115776
 ] 

Karthik Kambatla commented on HADOOP-10956:
---

The jars generated using the latest patch are at: 
http://people.apache.org/~kasha/newscript-hadoop-2.5.0

 Fix create-release script to include docs in the binary
 ---

 Key: HADOOP-10956
 URL: https://issues.apache.org/jira/browse/HADOOP-10956
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.5.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Blocker
 Attachments: hadoop-10956-1.patch, hadoop-10956-2.patch, 
 hadoop-10956-3.patch, hadoop-10956-4.patch


 The create-release script doesn't include docs in the binary tarball. We 
 should fix that. 





[jira] [Created] (HADOOP-11028) Use Java 7 HttpCookie to implement hadoop.auth cookie

2014-08-29 Thread Haohui Mai (JIRA)
Haohui Mai created HADOOP-11028:
---

 Summary: Use Java 7 HttpCookie to implement hadoop.auth cookie
 Key: HADOOP-11028
 URL: https://issues.apache.org/jira/browse/HADOOP-11028
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai


There are various workarounds for Java 6 (e.g., HADOOP-10991) in the code to 
write the correct HttpCookie. These workarounds should be removed once Hadoop 
has moved to Java 7.
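For illustration, `java.net.HttpCookie` can express attributes such as HttpOnly (a Java 7 addition) directly, instead of hand-building the Set-Cookie string. The helper below is a hypothetical sketch, not the authentication module's actual code:

```java
import java.net.HttpCookie;

public class AuthCookieSketch {
  // Hypothetical helper: build the hadoop.auth cookie with HttpCookie
  // instead of manual string concatenation.
  static HttpCookie makeAuthCookie(String token, String domain,
                                   boolean secure) {
    HttpCookie cookie = new HttpCookie("hadoop.auth", token);
    cookie.setPath("/");
    if (domain != null) {
      cookie.setDomain(domain);
    }
    cookie.setSecure(secure);
    cookie.setHttpOnly(true);  // setHttpOnly is only available since Java 7
    return cookie;
  }

  public static void main(String[] args) {
    HttpCookie c = makeAuthCookie("u=alice&t=kerberos", "example.com", true);
    System.out.println(c.getName() + "=" + c.getValue());
  }
}
```

Attribute handling (path, domain, secure, HttpOnly) then lives in one typed object rather than scattered string workarounds.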





[jira] [Updated] (HADOOP-11029) LocalFS Statistics performs thread local call per byte written

2014-08-29 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-11029:
-

Affects Version/s: 2.6.0
   2.5.0

 LocalFS Statistics performs thread local call per byte written
 --

 Key: HADOOP-11029
 URL: https://issues.apache.org/jira/browse/HADOOP-11029
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.0, 2.6.0
Reporter: Gopal V
 Attachments: local-fs-locking.png


 This code is there in the hot-path of IFile writer via RawLocalFileSystem.
 !local-fs-locking.png!
 From a preliminary glance, the lock prefix calls are coming from a 
 threadlocal.get() within FileSystem.Statistics 
 {code}
 /**
  * Get or create the thread-local data associated with the current thread.
  */
 private StatisticsData getThreadData() {
   StatisticsData data = threadData.get();
   if (data == null) {
     data = new StatisticsData(
         new WeakReference<Thread>(Thread.currentThread()));
     threadData.set(data);
     synchronized(this) {
       if (allData == null) {
         allData = new LinkedList<StatisticsData>();
       }
       allData.add(data);
     }
   }
   return data;
 }

 /**
  * Increment the bytes read in the statistics
  * @param newBytes the additional bytes read
  */
 public void incrementBytesRead(long newBytes) {
   getThreadData().bytesRead += newBytes;
 }
 {code}
 This is incredibly inefficient when used from FSDataOutputStream
 {code}
 public void write(int b) throws IOException {
   out.write(b);
   position++;
   if (statistics != null) {
     statistics.incrementBytesWritten(1);
   }
 }
 {code}





[jira] [Created] (HADOOP-11029) LocalFS Statistics performs thread local call per byte written

2014-08-29 Thread Gopal V (JIRA)
Gopal V created HADOOP-11029:


 Summary: LocalFS Statistics performs thread local call per byte 
written
 Key: HADOOP-11029
 URL: https://issues.apache.org/jira/browse/HADOOP-11029
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Gopal V
 Attachments: local-fs-locking.png

This code is there in the hot-path of IFile writer via RawLocalFileSystem.

!local-fs-locking.png!

From a preliminary glance, the lock prefix calls are coming from a 
threadlocal.get() within FileSystem.Statistics 

{code}
/**
 * Get or create the thread-local data associated with the current thread.
 */
private StatisticsData getThreadData() {
  StatisticsData data = threadData.get();
  if (data == null) {
    data = new StatisticsData(
        new WeakReference<Thread>(Thread.currentThread()));
    threadData.set(data);
    synchronized(this) {
      if (allData == null) {
        allData = new LinkedList<StatisticsData>();
      }
      allData.add(data);
    }
  }
  return data;
}

/**
 * Increment the bytes read in the statistics
 * @param newBytes the additional bytes read
 */
public void incrementBytesRead(long newBytes) {
  getThreadData().bytesRead += newBytes;
}
{code}

This is incredibly inefficient when used from FSDataOutputStream

{code}
public void write(int b) throws IOException {
  out.write(b);
  position++;
  if (statistics != null) {
    statistics.incrementBytesWritten(1);
  }
}
{code}
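The per-byte cost comes from resolving the thread-local `StatisticsData` on every `write(int)` call. A common mitigation is to count bytes locally and publish them once per buffer. The sketch below is illustrative only: `Statistics` here is a simplified stand-in for `FileSystem.Statistics`, and `writeWithBatching` is a hypothetical helper, not Hadoop code.

```java
public class BatchedStatsSketch {
  // Simplified stand-in for FileSystem.Statistics: every update is a
  // method call that would hit a ThreadLocal (or, here, a lock).
  static class Statistics {
    private long bytesWritten;
    synchronized void incrementBytesWritten(long n) { bytesWritten += n; }
    synchronized long getBytesWritten() { return bytesWritten; }
  }

  // Instead of statistics.incrementBytesWritten(1) per byte, accumulate a
  // local counter and flush once per buffer, amortizing the lookup cost
  // over the whole write.
  static long writeWithBatching(Statistics stats, byte[] data) {
    long pending = 0;
    for (byte b : data) {
      // out.write(b) would go here
      pending++;
    }
    stats.incrementBytesWritten(pending);  // one update for the whole buffer
    return pending;
  }

  public static void main(String[] args) {
    Statistics stats = new Statistics();
    writeWithBatching(stats, new byte[1024]);
    System.out.println(stats.getBytesWritten());  // prints 1024
  }
}
```

A 1024-byte buffer then costs one statistics update instead of 1024 thread-local lookups.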





[jira] [Updated] (HADOOP-11029) LocalFS Statistics performs thread local call per byte written

2014-08-29 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-11029:
-

Attachment: local-fs-locking.png

 LocalFS Statistics performs thread local call per byte written
 --

 Key: HADOOP-11029
 URL: https://issues.apache.org/jira/browse/HADOOP-11029
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.0, 2.6.0
Reporter: Gopal V
 Attachments: local-fs-locking.png


 This code is there in the hot-path of IFile writer via RawLocalFileSystem.
 !local-fs-locking.png!
 From a preliminary glance, the lock prefix calls are coming from a 
 threadlocal.get() within FileSystem.Statistics 
 {code}
 /**
  * Get or create the thread-local data associated with the current thread.
  */
 private StatisticsData getThreadData() {
   StatisticsData data = threadData.get();
   if (data == null) {
     data = new StatisticsData(
         new WeakReference<Thread>(Thread.currentThread()));
     threadData.set(data);
     synchronized(this) {
       if (allData == null) {
         allData = new LinkedList<StatisticsData>();
       }
       allData.add(data);
     }
   }
   return data;
 }

 /**
  * Increment the bytes read in the statistics
  * @param newBytes the additional bytes read
  */
 public void incrementBytesRead(long newBytes) {
   getThreadData().bytesRead += newBytes;
 }
 {code}
 This is incredibly inefficient when used from FSDataOutputStream
 {code}
 public void write(int b) throws IOException {
   out.write(b);
   position++;
   if (statistics != null) {
     statistics.incrementBytesWritten(1);
   }
 }
 {code}





[jira] [Updated] (HADOOP-11029) LocalFS Statistics performs thread local call per byte written

2014-08-29 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-11029:
-

Component/s: fs

 LocalFS Statistics performs thread local call per byte written
 --

 Key: HADOOP-11029
 URL: https://issues.apache.org/jira/browse/HADOOP-11029
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.5.0, 2.6.0
Reporter: Gopal V
 Attachments: local-fs-locking.png


 This code is there in the hot-path of IFile writer via RawLocalFileSystem.
 !local-fs-locking.png!
 From a preliminary glance, the lock prefix calls are coming from a 
 threadlocal.get() within FileSystem.Statistics 
 {code}
 /**
  * Get or create the thread-local data associated with the current thread.
  */
 private StatisticsData getThreadData() {
   StatisticsData data = threadData.get();
   if (data == null) {
     data = new StatisticsData(
         new WeakReference<Thread>(Thread.currentThread()));
     threadData.set(data);
     synchronized(this) {
       if (allData == null) {
         allData = new LinkedList<StatisticsData>();
       }
       allData.add(data);
     }
   }
   return data;
 }

 /**
  * Increment the bytes read in the statistics
  * @param newBytes the additional bytes read
  */
 public void incrementBytesRead(long newBytes) {
   getThreadData().bytesRead += newBytes;
 }
 {code}
 This is incredibly inefficient when used from FSDataOutputStream
 {code}
 public void write(int b) throws IOException {
   out.write(b);
   position++;
   if (statistics != null) {
     statistics.incrementBytesWritten(1);
   }
 }
 {code}





[jira] [Commented] (HADOOP-11015) Http server/client utils to propagate and recreate Exceptions from server to client

2014-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115803#comment-14115803
 ] 

Hadoop QA commented on HADOOP-11015:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12665400/HADOOP-11015.patch
  against trunk revision 15366d9.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms 
hadoop-hdfs-project/hadoop-hdfs-httpfs:

  org.apache.hadoop.ha.TestZKFailoverControllerStress

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4597//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4597//console

This message is automatically generated.

 Http server/client utils to propagate and recreate Exceptions from server to 
 client
 ---

 Key: HADOOP-11015
 URL: https://issues.apache.org/jira/browse/HADOOP-11015
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11015.patch, HADOOP-11015.patch, 
 HADOOP-11015.patch, HADOOP-11015.patch


 While working on HADOOP-10771 and discussing it with [~daryn], a suggested 
 improvement was to propagate the server-side exceptions to the client in the 
 same way WebHDFS does it.
 This JIRA is to provide a utility class to do the same and refactor HttpFS 
 and KMS to use it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10994) KeyProviderCryptoExtension should use CryptoCodec for generation/decryption of keys

2014-08-29 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115805#comment-14115805
 ] 

Andrew Wang commented on HADOOP-10994:
--

+1 thanks again Tucu

 KeyProviderCryptoExtension should use CryptoCodec for generation/decryption 
 of keys
 ---

 Key: HADOOP-10994
 URL: https://issues.apache.org/jira/browse/HADOOP-10994
 Project: Hadoop Common
  Issue Type: Task
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-10994.patch, HADOOP-10994.patch, 
 HADOOP-10994.patch


 Currently it uses the JDK Cipher; with the fs-encryption branch merged into 
 trunk, we can swap to CryptoCodec.





[jira] [Comment Edited] (HADOOP-11027) HADOOP_SECURE_COMMAND catch-all

2014-08-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115763#comment-14115763
 ] 

Allen Wittenauer edited comment on HADOOP-11027 at 8/29/14 8:54 PM:


* Undocumented. ;)
* hadoop_secure_verify_prereq is broken
* su in hadoop_start_secure_daemon_wrapper assumes privilege
* priv and non-priv out and pid files are swapped?
* jsvc should set -java-home


was (Author: aw):
* Undocumented. ;)
* hadoop_secure_verify_prereq is broken
* su in hadoop_start_secure_daemon_wrapper assumes privilege
* priv and non-priv out and pid files are swapped?

 HADOOP_SECURE_COMMAND catch-all
 ---

 Key: HADOOP-11027
 URL: https://issues.apache.org/jira/browse/HADOOP-11027
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Allen Wittenauer
Priority: Critical

 Enabling HADOOP_SECURE_COMMAND to override jsvc doesn't work.  Here's a list 
 of issues!





[jira] [Created] (HADOOP-11030) Should define a jackson.version property in the POM instead of using explicit version

2014-08-29 Thread Juan Yu (JIRA)
Juan Yu created HADOOP-11030:


 Summary: Should define a jackson.version property in the POM 
instead of using explicit version
 Key: HADOOP-11030
 URL: https://issues.apache.org/jira/browse/HADOOP-11030
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Juan Yu
Assignee: Juan Yu
Priority: Minor
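A property-based version declaration might look like the following hypothetical `hadoop-project/pom.xml` excerpt; the `1.9.13` value and the single artifact shown are illustrative, not taken from the patch:

```xml
<properties>
  <!-- Single place to bump the Jackson version for every module -->
  <jackson.version>1.9.13</jackson.version>
</properties>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-mapper-asl</artifactId>
      <!-- Reference the property instead of repeating the literal version -->
      <version>${jackson.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

With the property in the parent POM, upgrading Jackson becomes a one-line change instead of an edit per dependency declaration.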








[jira] [Updated] (HADOOP-10922) User documentation for CredentialShell

2014-08-29 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-10922:
-

Attachment: HADOOP-10922-1.patch

This patch provides a somewhat detailed description of the credential command.

 User documentation for CredentialShell
 --

 Key: HADOOP-10922
 URL: https://issues.apache.org/jira/browse/HADOOP-10922
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Andrew Wang
Assignee: Larry McCay
 Attachments: HADOOP-10922-1.patch


 The CredentialShell needs end user documentation for the website.





[jira] [Updated] (HADOOP-10922) User documentation for CredentialShell

2014-08-29 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-10922:
-

Status: Patch Available  (was: Open)

 User documentation for CredentialShell
 --

 Key: HADOOP-10922
 URL: https://issues.apache.org/jira/browse/HADOOP-10922
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Andrew Wang
Assignee: Larry McCay
 Attachments: HADOOP-10922-1.patch


 The CredentialShell needs end user documentation for the website.





[jira] [Commented] (HADOOP-10922) User documentation for CredentialShell

2014-08-29 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115819#comment-14115819
 ] 

Larry McCay commented on HADOOP-10922:
--

[~andrew.wang] Can you take a look at this patch and see if it is a good start 
for credential providers? I would like to get the key provider command done as 
well before tackling the design docs for each. Let me know if that makes sense 
to you.

 User documentation for CredentialShell
 --

 Key: HADOOP-10922
 URL: https://issues.apache.org/jira/browse/HADOOP-10922
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Andrew Wang
Assignee: Larry McCay
 Attachments: HADOOP-10922-1.patch


 The CredentialShell needs end user documentation for the website.





[jira] [Updated] (HADOOP-11031) Design Document for Credential Provider API

2014-08-29 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11031:
-

Issue Type: Bug  (was: Sub-task)
Parent: (was: HADOOP-10922)

 Design Document for Credential Provider API
 ---

 Key: HADOOP-11031
 URL: https://issues.apache.org/jira/browse/HADOOP-11031
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Larry McCay

 Provide detailed overview of the design, intent and use of the credential 
 management API.





[jira] [Created] (HADOOP-11031) Design Document for Credential Provider API

2014-08-29 Thread Larry McCay (JIRA)
Larry McCay created HADOOP-11031:


 Summary: Design Document for Credential Provider API
 Key: HADOOP-11031
 URL: https://issues.apache.org/jira/browse/HADOOP-11031
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Larry McCay


Provide detailed overview of the design, intent and use of the credential 
management API.





[jira] [Assigned] (HADOOP-11031) Design Document for Credential Provider API

2014-08-29 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay reassigned HADOOP-11031:


Assignee: Larry McCay

 Design Document for Credential Provider API
 ---

 Key: HADOOP-11031
 URL: https://issues.apache.org/jira/browse/HADOOP-11031
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Larry McCay
Assignee: Larry McCay

 Provide detailed overview of the design, intent and use of the credential 
 management API.





[jira] [Updated] (HADOOP-11030) Should define a jackson.version property in the POM instead of using explicit version

2014-08-29 Thread Juan Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Yu updated HADOOP-11030:
-

Attachment: HADOOP-11030.patch

 Should define a jackson.version property in the POM instead of using explicit 
 version
 -

 Key: HADOOP-11030
 URL: https://issues.apache.org/jira/browse/HADOOP-11030
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Juan Yu
Assignee: Juan Yu
Priority: Minor
 Attachments: HADOOP-11030.patch








[jira] [Commented] (HADOOP-10922) User documentation for CredentialShell

2014-08-29 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115831#comment-14115831
 ] 

Larry McCay commented on HADOOP-10922:
--

Filed HADOOP-11031 for design document.

 User documentation for CredentialShell
 --

 Key: HADOOP-10922
 URL: https://issues.apache.org/jira/browse/HADOOP-10922
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Andrew Wang
Assignee: Larry McCay
 Attachments: HADOOP-10922-1.patch


 The CredentialShell needs end user documentation for the website.





[jira] [Updated] (HADOOP-11030) Should define a jackson.version property in the POM instead of using explicit version

2014-08-29 Thread Juan Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Yu updated HADOOP-11030:
-

Status: Patch Available  (was: Open)

 Should define a jackson.version property in the POM instead of using explicit 
 version
 -

 Key: HADOOP-11030
 URL: https://issues.apache.org/jira/browse/HADOOP-11030
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Juan Yu
Assignee: Juan Yu
Priority: Minor
 Attachments: HADOOP-11030.patch








[jira] [Updated] (HADOOP-10956) Fix create-release script to include docs in the binary

2014-08-29 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10956:


Attachment: assembly-src-tweak.patch

[~kasha],

We should move the LICENSE/README/NOTICE TXT files from common/hdfs/yarn/mapred 
down to the root of the source.

The attached assembly-src-tweak.patch will make the src packaging pick them up.

A fast way of creating the SRC tarball is {{mvn clean package -Psrc -N}}.

You'll have to modify the BIN tarball building to include the TXT files; that 
is the hadoop-dist POM TAR stitching.

 Fix create-release script to include docs in the binary
 ---

 Key: HADOOP-10956
 URL: https://issues.apache.org/jira/browse/HADOOP-10956
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.5.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Blocker
 Attachments: assembly-src-tweak.patch, hadoop-10956-1.patch, 
 hadoop-10956-2.patch, hadoop-10956-3.patch, hadoop-10956-4.patch


 The create-release script doesn't include docs in the binary tarball. We 
 should fix that. 





[jira] [Commented] (HADOOP-11015) Http server/client utils to propagate and recreate Exceptions from server to client

2014-08-29 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115842#comment-14115842
 ] 

Alejandro Abdelnur commented on HADOOP-11015:
-

test failure unrelated.

 Http server/client utils to propagate and recreate Exceptions from server to 
 client
 ---

 Key: HADOOP-11015
 URL: https://issues.apache.org/jira/browse/HADOOP-11015
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11015.patch, HADOOP-11015.patch, 
 HADOOP-11015.patch, HADOOP-11015.patch


 While working on HADOOP-10771 and discussing it with [~daryn], a suggested 
 improvement was to propagate the server-side exceptions to the client in the 
 same way WebHDFS does it.
 This JIRA is to provide a utility class to do the same and refactor HttpFS 
 and KMS to use it.





[jira] [Updated] (HADOOP-10994) KeyProviderCryptoExtension should use CryptoCodec for generation/decryption of keys

2014-08-29 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10994:


   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.

 KeyProviderCryptoExtension should use CryptoCodec for generation/decryption 
 of keys
 ---

 Key: HADOOP-10994
 URL: https://issues.apache.org/jira/browse/HADOOP-10994
 Project: Hadoop Common
  Issue Type: Task
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.6.0

 Attachments: HADOOP-10994.patch, HADOOP-10994.patch, 
 HADOOP-10994.patch


 Currently it uses the JDK Cipher; with the fs-encryption branch merged into 
 trunk, we can swap to CryptoCodec.





[jira] [Updated] (HADOOP-11026) add FileSystem contract specification for FSDataInputStream and FSDataOutputStream#isEncrypted

2014-08-29 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HADOOP-11026:
--

Attachment: HADOOP-11026.001.patch

This patch adds the test case to AbstractContractOpenTest, and sections on 
FSDataInput/OutputStream.isEncrypted() to filesystem.md and 
fsdatainputstream.md.

 add FileSystem contract specification for FSDataInputStream and 
 FSDataOutputStream#isEncrypted
 --

 Key: HADOOP-11026
 URL: https://issues.apache.org/jira/browse/HADOOP-11026
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation, test
Affects Versions: 3.0.0, 2.6.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Minor
 Attachments: HADOOP-11026-prelim.001.patch, HADOOP-11026.001.patch


 Following on to HDFS-6843, the contract specification for FSDataInputStream 
 and FSDataOutputStream needs to be updated to reflect the addition of 
 isEncrypted.





[jira] [Commented] (HADOOP-11021) Configurable replication degree in the hadoop archive command

2014-08-29 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115867#comment-14115867
 ] 

Andrew Wang commented on HADOOP-11021:
--

+1 LGTM, will commit this shortly.

 Configurable replication degree in the hadoop archive command
 -

 Key: HADOOP-11021
 URL: https://issues.apache.org/jira/browse/HADOOP-11021
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
Priority: Minor
 Attachments: HADOOP-11021.path, HDFS-6968-2.patch, HDFS-6968.patch


 Due to the hard-coded replication degree below in {{HadoopArchives}}, the 
 {{archive}} command will fail if HDFS maximum replication has already been 
 configured to a number lower than 10. 
 {code:java}
 //increase the replication of src files
 jobfs.setReplication(srcFiles, (short) 10);
 {code}
 This Jira will make the {{archive}} command configurable with desired 
 replication degree.
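One way to make the degree configurable can be sketched as follows; `java.util.Properties` stands in for Hadoop's `Configuration`, and the key name `har.replication` is hypothetical, not necessarily what the patch uses:

```java
import java.util.Properties;

public class HarReplicationSketch {
  // Hypothetical configuration key and default, replacing the literal 10.
  static final String HAR_REPLICATION_KEY = "har.replication";
  static final short HAR_REPLICATION_DEFAULT = 3;

  // Read the desired replication degree from configuration so clusters with
  // a low dfs.replication.max can still run the archive command.
  static short getReplication(Properties conf) {
    return Short.parseShort(
        conf.getProperty(HAR_REPLICATION_KEY,
            Short.toString(HAR_REPLICATION_DEFAULT)));
  }

  public static void main(String[] args) {
    Properties conf = new Properties();
    conf.setProperty(HAR_REPLICATION_KEY, "5");
    // jobfs.setReplication(srcFiles, getReplication(conf)) would replace
    // the hard-coded jobfs.setReplication(srcFiles, (short) 10)
    System.out.println(getReplication(conf));  // prints 5
  }
}
```

The call site then respects whatever maximum replication the cluster allows.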





[jira] [Commented] (HADOOP-11030) Should define a jackson.version property in the POM instead of using explicit version

2014-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115868#comment-14115868
 ] 

Hadoop QA commented on HADOOP-11030:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12665423/HADOOP-11030.patch
  against trunk revision b03653f.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4600//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4600//console

This message is automatically generated.

 Should define a jackson.version property in the POM instead of using explicit 
 version
 -

 Key: HADOOP-11030
 URL: https://issues.apache.org/jira/browse/HADOOP-11030
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Juan Yu
Assignee: Juan Yu
Priority: Minor
 Attachments: HADOOP-11030.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10956) Fix create-release script to include docs in the binary

2014-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115869#comment-14115869
 ] 

Hadoop QA commented on HADOOP-10956:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12665426/assembly-src-tweak.patch
  against trunk revision c60da4d.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4601//console

This message is automatically generated.

 Fix create-release script to include docs in the binary
 ---

 Key: HADOOP-10956
 URL: https://issues.apache.org/jira/browse/HADOOP-10956
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.5.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Blocker
 Attachments: assembly-src-tweak.patch, hadoop-10956-1.patch, 
 hadoop-10956-2.patch, hadoop-10956-3.patch, hadoop-10956-4.patch


 The create-release script doesn't include docs in the binary tarball. We 
 should fix that. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-11021) Configurable replication factor in the hadoop archive command

2014-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-11021:
-

Summary: Configurable replication factor in the hadoop archive command  
(was: Configurable replication degree in the hadoop archive command)

 Configurable replication factor in the hadoop archive command
 -

 Key: HADOOP-11021
 URL: https://issues.apache.org/jira/browse/HADOOP-11021
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
Priority: Minor
 Attachments: HADOOP-11021.path, HDFS-6968-2.patch, HDFS-6968.patch


 Due to the hard-coded replication degree below in {{HadoopArchives}}, the 
 {{archive}} command will fail if the HDFS maximum replication has been 
 configured to a number lower than 10. 
 {code:java}
 // increase the replication of src files
 jobfs.setReplication(srcFiles, (short) 10);
 {code}
 This JIRA will make the {{archive}} command's replication degree configurable.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10956) Fix create-release script to include docs in the binary

2014-08-29 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-10956:
--

Status: Open  (was: Patch Available)

 Fix create-release script to include docs in the binary
 ---

 Key: HADOOP-10956
 URL: https://issues.apache.org/jira/browse/HADOOP-10956
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.5.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
Priority: Blocker
 Attachments: assembly-src-tweak.patch, hadoop-10956-1.patch, 
 hadoop-10956-2.patch, hadoop-10956-3.patch, hadoop-10956-4.patch


 The create-release script doesn't include docs in the binary tarball. We 
 should fix that. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-11032) Replace use of Guava Stopwatch with Apache StopWatch

2014-08-29 Thread Gary Steelman (JIRA)
Gary Steelman created HADOOP-11032:
--

 Summary: Replace use of Guava Stopwatch with Apache StopWatch
 Key: HADOOP-11032
 URL: https://issues.apache.org/jira/browse/HADOOP-11032
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Gary Steelman


This patch reduces Hadoop's dependency on an old version of Guava. 
Stopwatch.elapsedMillis() isn't part of Guava past v16, and the tools I'm 
working on use v17. 

To remedy this, and also to reduce Hadoop's reliance on old versions of Guava, 
we can use the Apache StopWatch (org.apache.commons.lang.time.StopWatch), which 
provides nearly equivalent functionality. commons-lang is already a 
dependency of Hadoop, so this will not introduce new dependencies. 
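A hedged sketch of the migration: Guava's Stopwatch.elapsedMillis() maps to commons-lang's StopWatch.getTime(), which also returns elapsed milliseconds. To keep the snippet dependency-free, a plain System.nanoTime() stand-in shows the equivalent measurement.

```java
// Sketch of the API mapping (assumption: commons-lang StopWatch.getTime()
// returns elapsed milliseconds, like Guava's elapsedMillis()). Uses
// System.nanoTime() as a stand-in so the example runs without commons-lang.
public class StopWatchMigration {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();        // watch.start()
        Thread.sleep(50);                      // the timed work
        // watch.getTime() equivalent: elapsed wall time in milliseconds
        long elapsedMs = (System.nanoTime() - start) / 1_000_000L;
        System.out.println(elapsedMs >= 40);   // prints true
    }
}
```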



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10922) User documentation for CredentialShell

2014-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115893#comment-14115893
 ] 

Hadoop QA commented on HADOOP-10922:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12665418/HADOOP-10922-1.patch
  against trunk revision b03653f.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4599//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4599//console

This message is automatically generated.

 User documentation for CredentialShell
 --

 Key: HADOOP-10922
 URL: https://issues.apache.org/jira/browse/HADOOP-10922
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Andrew Wang
Assignee: Larry McCay
 Attachments: HADOOP-10922-1.patch


 The CredentialShell needs end user documentation for the website.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-11012) hadoop fs -text of zero-length file causes EOFException

2014-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115892#comment-14115892
 ] 

Hadoop QA commented on HADOOP-11012:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12665074/HDFS-6915.201408282053.txt
  against trunk revision b03653f.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-common-project/hadoop-common:

org.apache.hadoop.http.TestHttpServerLifecycle

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4598//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4598//console

This message is automatically generated.

 hadoop fs -text of zero-length file causes EOFException
 ---

 Key: HADOOP-11012
 URL: https://issues.apache.org/jira/browse/HADOOP-11012
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.5.0
Reporter: Eric Payne
Assignee: Eric Payne
 Attachments: HDFS-6915.201408271824.txt, HDFS-6915.201408272144.txt, 
 HDFS-6915.201408282053.txt


 List:
 $ $HADOOP_PREFIX/bin/hadoop fs -ls /user/ericp/foo
 -rw---   3 ericp hdfs  0 2014-08-22 16:37 /user/ericp/foo
 Cat:
 $ $HADOOP_PREFIX/bin/hadoop fs -cat /user/ericp/foo
 Text:
 $ $HADOOP_PREFIX/bin/hadoop fs -text /user/ericp/foo
 text: java.io.EOFException
   at java.io.DataInputStream.readShort(DataInputStream.java:315)
   at 
 org.apache.hadoop.fs.shell.Display$Text.getInputStream(Display.java:130)
   at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:98)
   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:306)
   at 
 org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
   at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
   at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
   at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
   at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
   at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
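The trace shows Display$Text.getInputStream calling readShort() to sniff the file's magic number, which throws EOFException on a zero-byte file. A dependency-free sketch of one possible guard (illustrative, not necessarily the committed fix):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Reproduces the failure mode on an empty stream and guards it: readShort()
// needs two bytes, so check the length first. The guard shown here is
// illustrative, not necessarily the approach the patch takes.
public class ZeroLengthSniff {
    static String sniff(byte[] contents) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(contents));
        if (contents.length < 2) {
            return "plain";            // too short to hold a magic number
        }
        short magic = in.readShort();  // safe: at least two bytes present
        return "magic=" + magic;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(sniff(new byte[0]));            // prints plain
        System.out.println(sniff(new byte[]{0x1f, 0x00})); // prints magic=7936
    }
}
```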



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-11033) /bin/hdfs script ignores JAVA_HOME

2014-08-29 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HADOOP-11033:
--

 Summary: /bin/hdfs script ignores JAVA_HOME
 Key: HADOOP-11033
 URL: https://issues.apache.org/jira/browse/HADOOP-11033
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
 Environment: Mac OSX with both JDK_1.7.0_67 and JDK_1.8.0_20 installed.
Reporter: Lei (Eddy) Xu


Running the {{start-dfs.sh}} script should pick up the java specified by 
JAVA_HOME, which is defined in my {{~/.zshrc}} and {{~/.bashrc}}.

My JAVA_HOME is
{noformat}
 $ echo $JAVA_HOME
/Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home
{noformat}

However, when I start a local cluster using {{start-dfs.sh}}, it reports

{noformat}
JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home
2014-08-29 14:48:20,767 INFO  [main] namenode.NameNode 
(StringUtils.java:startupShutdownMessage(633)) - STARTUP_MSG:
.

STARTUP_MSG:   java = 1.8.0_20
{noformat}

It is expected to use JDK 7 instead. This bug occurs only on trunk, not on 
branch-2.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-11021) Configurable replication factor in the hadoop archive command

2014-08-29 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-11021:
-

   Resolution: Fixed
Fix Version/s: 2.6.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2, thanks again Zhe!

 Configurable replication factor in the hadoop archive command
 -

 Key: HADOOP-11021
 URL: https://issues.apache.org/jira/browse/HADOOP-11021
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-11021.path, HDFS-6968-2.patch, HDFS-6968.patch


 Due to the hard-coded replication degree below in {{HadoopArchives}}, the 
 {{archive}} command will fail if the HDFS maximum replication has been 
 configured to a number lower than 10. 
 {code:java}
 // increase the replication of src files
 jobfs.setReplication(srcFiles, (short) 10);
 {code}
 This JIRA will make the {{archive}} command's replication degree configurable.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-11033) /bin/hdfs script ignores JAVA_HOME

2014-08-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115920#comment-14115920
 ] 

Allen Wittenauer commented on HADOOP-11033:
---

This might be HADOOP-11022.

 /bin/hdfs script ignores JAVA_HOME
 --

 Key: HADOOP-11033
 URL: https://issues.apache.org/jira/browse/HADOOP-11033
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
 Environment: Mac OSX with both JDK_1.7.0_67 and JDK_1.8.0_20 
 installed.
Reporter: Lei (Eddy) Xu

 Running the {{start-dfs.sh}} script should pick up the java specified by 
 JAVA_HOME, which is defined in my {{~/.zshrc}} and {{~/.bashrc}}.
 My JAVA_HOME is
 {noformat}
  $ echo $JAVA_HOME
 /Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home
 {noformat}
 However, when I start a local cluster using {{start-dfs.sh}}, it reports
 {noformat}
 JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home
 2014-08-29 14:48:20,767 INFO  [main] namenode.NameNode 
 (StringUtils.java:startupShutdownMessage(633)) - STARTUP_MSG:
 .
 STARTUP_MSG:   java = 1.8.0_20
 {noformat}
 It is expected to use JDK 7 instead. This bug occurs only on trunk, not on 
 branch-2.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-11021) Configurable replication factor in the hadoop archive command

2014-08-29 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115924#comment-14115924
 ] 

Zhe Zhang commented on HADOOP-11021:


Cool! Thanks for the great feedback on doc style etc.

 Configurable replication factor in the hadoop archive command
 -

 Key: HADOOP-11021
 URL: https://issues.apache.org/jira/browse/HADOOP-11021
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-11021.path, HDFS-6968-2.patch, HDFS-6968.patch


 Due to the hard-coded replication degree below in {{HadoopArchives}}, the 
 {{archive}} command will fail if the HDFS maximum replication has been 
 configured to a number lower than 10. 
 {code:java}
 // increase the replication of src files
 jobfs.setReplication(srcFiles, (short) 10);
 {code}
 This JIRA will make the {{archive}} command's replication degree configurable.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-11033) /bin/hdfs script ignores JAVA_HOME

2014-08-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115926#comment-14115926
 ] 

Allen Wittenauer commented on HADOOP-11033:
---

Oh, actually, no it's not. This is a new bug.

{code}
Darwin)
  if [[ -x /usr/libexec/java_home ]]; then
export JAVA_HOME=$(/usr/libexec/java_home)
  else
export JAVA_HOME=/Library/Java/Home
  fi
;;
{code}

This code doesn't check if JAVA_HOME is already defined.  It should.
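A sketch of the missing guard, extracted into a testable function (the function name is hypothetical; the real change belongs in the shell code quoted above): derive JAVA_HOME on Darwin only when the user has not already set it.

```shell
#!/usr/bin/env bash
# Illustrative sketch of the guard: respect a pre-set JAVA_HOME and only
# fall back to the Darwin defaults when it is empty.
pick_java_home() {
  local os="$1" current="$2"
  if [[ -n "${current}" ]]; then
    # User already chose a JDK in ~/.zshrc, ~/.bashrc, etc. -- keep it.
    echo "${current}"
    return
  fi
  case "${os}" in
    Darwin)
      if [[ -x /usr/libexec/java_home ]]; then
        /usr/libexec/java_home
      else
        echo "/Library/Java/Home"
      fi
      ;;
  esac
}

# A preset JAVA_HOME wins over the OS-derived default:
pick_java_home Darwin "/Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home"
```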

 /bin/hdfs script ignores JAVA_HOME
 --

 Key: HADOOP-11033
 URL: https://issues.apache.org/jira/browse/HADOOP-11033
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
 Environment: Mac OSX with both JDK_1.7.0_67 and JDK_1.8.0_20 
 installed.
Reporter: Lei (Eddy) Xu

 Running the {{start-dfs.sh}} script should pick up the java specified by 
 JAVA_HOME, which is defined in my {{~/.zshrc}} and {{~/.bashrc}}.
 My JAVA_HOME is
 {noformat}
  $ echo $JAVA_HOME
 /Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home
 {noformat}
 However, when I start a local cluster using {{start-dfs.sh}}, it reports
 {noformat}
 JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home
 2014-08-29 14:48:20,767 INFO  [main] namenode.NameNode 
 (StringUtils.java:startupShutdownMessage(633)) - STARTUP_MSG:
 .
 STARTUP_MSG:   java = 1.8.0_20
 {noformat}
 It is expected to use JDK 7 instead. This bug occurs only on trunk, not on 
 branch-2.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-11033) /bin/hdfs script ignores JAVA_HOME

2014-08-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11033:
--

Attachment: HADOOP-11033.patch

Try this out.


 /bin/hdfs script ignores JAVA_HOME
 --

 Key: HADOOP-11033
 URL: https://issues.apache.org/jira/browse/HADOOP-11033
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
 Environment: Mac OSX with both JDK_1.7.0_67 and JDK_1.8.0_20 
 installed.
Reporter: Lei (Eddy) Xu
 Attachments: HADOOP-11033.patch


 Running the {{start-dfs.sh}} script should pick up the java specified by 
 JAVA_HOME, which is defined in my {{~/.zshrc}} and {{~/.bashrc}}.
 My JAVA_HOME is
 {noformat}
  $ echo $JAVA_HOME
 /Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home
 {noformat}
 However, when I start a local cluster using {{start-dfs.sh}}, it reports
 {noformat}
 JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home
 2014-08-29 14:48:20,767 INFO  [main] namenode.NameNode 
 (StringUtils.java:startupShutdownMessage(633)) - STARTUP_MSG:
 .
 STARTUP_MSG:   java = 1.8.0_20
 {noformat}
 It is expected to use JDK 7 instead. This bug occurs only on trunk, not on 
 branch-2.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-11033) shell scripts ignore JAVA_HOME on OS X

2014-08-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11033:
--

Summary: shell scripts ignore JAVA_HOME on OS X  (was: /bin/hdfs script 
ignores JAVA_HOME)

 shell scripts ignore JAVA_HOME on OS X
 --

 Key: HADOOP-11033
 URL: https://issues.apache.org/jira/browse/HADOOP-11033
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
 Environment: Mac OSX with both JDK_1.7.0_67 and JDK_1.8.0_20 
 installed.
Reporter: Lei (Eddy) Xu
 Attachments: HADOOP-11033.patch


 Running the {{start-dfs.sh}} script should pick up the java specified by 
 JAVA_HOME, which is defined in my {{~/.zshrc}} and {{~/.bashrc}}.
 My JAVA_HOME is
 {noformat}
  $ echo $JAVA_HOME
 /Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home
 {noformat}
 However, when I start a local cluster using {{start-dfs.sh}}, it reports
 {noformat}
 JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home
 2014-08-29 14:48:20,767 INFO  [main] namenode.NameNode 
 (StringUtils.java:startupShutdownMessage(633)) - STARTUP_MSG:
 .
 STARTUP_MSG:   java = 1.8.0_20
 {noformat}
 It is expected to use JDK 7 instead. This bug occurs only on trunk, not on 
 branch-2.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-11033) shell scripts ignore JAVA_HOME on OS X

2014-08-29 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115948#comment-14115948
 ] 

Lei (Eddy) Xu commented on HADOOP-11033:


[~aw] Thanks for helping with this. I've tried your patch and it works on my 
machine.

 shell scripts ignore JAVA_HOME on OS X
 --

 Key: HADOOP-11033
 URL: https://issues.apache.org/jira/browse/HADOOP-11033
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
 Environment: Mac OSX with both JDK_1.7.0_67 and JDK_1.8.0_20 
 installed.
Reporter: Lei (Eddy) Xu
 Attachments: HADOOP-11033.patch


 Running the {{start-dfs.sh}} script should pick up the java specified by 
 JAVA_HOME, which is defined in my {{~/.zshrc}} and {{~/.bashrc}}.
 My JAVA_HOME is
 {noformat}
  $ echo $JAVA_HOME
 /Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home
 {noformat}
 However, when I start a local cluster using {{start-dfs.sh}}, it reports
 {noformat}
 JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home
 2014-08-29 14:48:20,767 INFO  [main] namenode.NameNode 
 (StringUtils.java:startupShutdownMessage(633)) - STARTUP_MSG:
 .
 STARTUP_MSG:   java = 1.8.0_20
 {noformat}
 It is expected to use JDK 7 instead. This bug occurs only on trunk, not on 
 branch-2.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-11033) shell scripts ignore JAVA_HOME on OS X

2014-08-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115953#comment-14115953
 ] 

Allen Wittenauer commented on HADOOP-11033:
---

I like these easy ones. :D

 shell scripts ignore JAVA_HOME on OS X
 --

 Key: HADOOP-11033
 URL: https://issues.apache.org/jira/browse/HADOOP-11033
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
 Environment: Mac OSX with both JDK_1.7.0_67 and JDK_1.8.0_20 
 installed.
Reporter: Lei (Eddy) Xu
 Attachments: HADOOP-11033.patch


 Running the {{start-dfs.sh}} script should pick up the java specified by 
 JAVA_HOME, which is defined in my {{~/.zshrc}} and {{~/.bashrc}}.
 My JAVA_HOME is
 {noformat}
  $ echo $JAVA_HOME
 /Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home
 {noformat}
 However, when I start a local cluster using {{start-dfs.sh}}, it reports
 {noformat}
 JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home
 2014-08-29 14:48:20,767 INFO  [main] namenode.NameNode 
 (StringUtils.java:startupShutdownMessage(633)) - STARTUP_MSG:
 .
 STARTUP_MSG:   java = 1.8.0_20
 {noformat}
 It is expected to use JDK 7 instead. This bug occurs only on trunk, not on 
 branch-2.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-11033) shell scripts ignore JAVA_HOME on OS X

2014-08-29 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11033:
--

Status: Patch Available  (was: Open)

 shell scripts ignore JAVA_HOME on OS X
 --

 Key: HADOOP-11033
 URL: https://issues.apache.org/jira/browse/HADOOP-11033
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
 Environment: Mac OSX with both JDK_1.7.0_67 and JDK_1.8.0_20 
 installed.
Reporter: Lei (Eddy) Xu
 Attachments: HADOOP-11033.patch


 Running the {{start-dfs.sh}} script should pick up the java specified by 
 JAVA_HOME, which is defined in my {{~/.zshrc}} and {{~/.bashrc}}.
 My JAVA_HOME is
 {noformat}
  $ echo $JAVA_HOME
 /Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home
 {noformat}
 However, when I start a local cluster using {{start-dfs.sh}}, it reports
 {noformat}
 JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home
 2014-08-29 14:48:20,767 INFO  [main] namenode.NameNode 
 (StringUtils.java:startupShutdownMessage(633)) - STARTUP_MSG:
 .
 STARTUP_MSG:   java = 1.8.0_20
 {noformat}
 It is expected to use JDK 7 instead. This bug occurs only on trunk, not on 
 branch-2.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-11012) hadoop fs -text of zero-length file causes EOFException

2014-08-29 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115978#comment-14115978
 ] 

Jason Lowe commented on HADOOP-11012:
-

+1 lgtm.  Will commit this next week to give [~daryn] a chance to comment 
further.

 hadoop fs -text of zero-length file causes EOFException
 ---

 Key: HADOOP-11012
 URL: https://issues.apache.org/jira/browse/HADOOP-11012
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.5.0
Reporter: Eric Payne
Assignee: Eric Payne
 Attachments: HDFS-6915.201408271824.txt, HDFS-6915.201408272144.txt, 
 HDFS-6915.201408282053.txt


 List:
 $ $HADOOP_PREFIX/bin/hadoop fs -ls /user/ericp/foo
 -rw---   3 ericp hdfs  0 2014-08-22 16:37 /user/ericp/foo
 Cat:
 $ $HADOOP_PREFIX/bin/hadoop fs -cat /user/ericp/foo
 Text:
 $ $HADOOP_PREFIX/bin/hadoop fs -text /user/ericp/foo
 text: java.io.EOFException
   at java.io.DataInputStream.readShort(DataInputStream.java:315)
   at 
 org.apache.hadoop.fs.shell.Display$Text.getInputStream(Display.java:130)
   at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:98)
   at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:306)
   at 
 org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
   at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
   at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
   at 
 org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
   at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
   at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-11034) ViewFileSystem is missing getStatus(Path)

2014-08-29 Thread Gary Steelman (JIRA)
Gary Steelman created HADOOP-11034:
--

 Summary: ViewFileSystem is missing getStatus(Path)
 Key: HADOOP-11034
 URL: https://issues.apache.org/jira/browse/HADOOP-11034
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Reporter: Gary Steelman


This patch implements ViewFileSystem#getStatus(Path), which is currently 
unimplemented.

getStatus(Path) should return the FsStatus of the FileSystem backing the path. 
Currently it returns the same result as getStatus(): a default of Long.MAX_VALUE 
for capacity, 0 for used, and Long.MAX_VALUE for remaining space. 
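A dependency-free sketch of the delegation being requested: resolve the mount point covering the path (longest-prefix match, as a viewfs mount table does) and report that backing store's status. Plain longs stand in for Hadoop's FsStatus; all mounts and values are illustrative.

```java
import java.util.Map;
import java.util.TreeMap;

// Stand-in for the delegation HADOOP-11034 requests: pick the mount whose
// prefix covers the path and report its status, instead of one default.
// Plain longs replace Hadoop's FsStatus; all values are illustrative.
public class ViewFsStatusSketch {
    // mount point -> remaining capacity of the backing FileSystem (bytes)
    static final TreeMap<String, Long> REMAINING = new TreeMap<>(Map.of(
        "/data", 600L,
        "/data/archive", 90L));

    // Longest-prefix match over the mount table.
    static long remainingFor(String path) {
        for (String mount : REMAINING.descendingKeySet()) {
            if (path.equals(mount) || path.startsWith(mount + "/")) {
                return REMAINING.get(mount);
            }
        }
        throw new IllegalArgumentException("no mount covers " + path);
    }

    public static void main(String[] args) {
        System.out.println(remainingFor("/data/archive/2014")); // prints 90
        System.out.println(remainingFor("/data/logs"));         // prints 600
    }
}
```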



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work stopped] (HADOOP-11026) add FileSystem contract specification for FSDataInputStream and FSDataOutputStream#isEncrypted

2014-08-29 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-11026 stopped by Charles Lamb.

 add FileSystem contract specification for FSDataInputStream and 
 FSDataOutputStream#isEncrypted
 --

 Key: HADOOP-11026
 URL: https://issues.apache.org/jira/browse/HADOOP-11026
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation, test
Affects Versions: 3.0.0, 2.6.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Minor
 Attachments: HADOOP-11026-prelim.001.patch, HADOOP-11026.001.patch


 Following on to HDFS-6843, the contract specification for FSDataInputStream 
 and FSDataOutputStream needs to be updated to reflect the addition of 
 isEncrypted.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-11030) Define a variable jackson.version instead of using constant at multiple places

2014-08-29 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-11030:
--

Summary: Define a variable jackson.version instead of using constant at 
multiple places  (was: Should define a jackson.version property in the POM 
instead of using explicit version)
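For context, the change amounts to a standard Maven property. A hedged sketch (the codehaus coordinates and version number are illustrative of Hadoop's Jackson 1.x usage at the time, not copied from the patch):

```xml
<!-- In the parent POM: declare the version once. -->
<properties>
  <jackson.version>1.9.13</jackson.version>
</properties>

<!-- In each module that previously hard-coded the version: -->
<dependency>
  <groupId>org.codehaus.jackson</groupId>
  <artifactId>jackson-mapper-asl</artifactId>
  <version>${jackson.version}</version>
</dependency>
```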

 Define a variable jackson.version instead of using constant at multiple places
 --

 Key: HADOOP-11030
 URL: https://issues.apache.org/jira/browse/HADOOP-11030
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Juan Yu
Assignee: Juan Yu
Priority: Minor
 Attachments: HADOOP-11030.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-11030) Define a variable jackson.version instead of using constant at multiple places

2014-08-29 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14115995#comment-14115995
 ] 

Karthik Kambatla commented on HADOOP-11030:
---

+1. Committing this. 

 Define a variable jackson.version instead of using constant at multiple places
 --

 Key: HADOOP-11030
 URL: https://issues.apache.org/jira/browse/HADOOP-11030
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Juan Yu
Assignee: Juan Yu
Priority: Minor
 Attachments: HADOOP-11030.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-11030) Define a variable jackson.version instead of using constant at multiple places

2014-08-29 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-11030:
--

   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks for the improvement, Juan. Just committed this to trunk and branch-2. 

 Define a variable jackson.version instead of using constant at multiple places
 --

 Key: HADOOP-11030
 URL: https://issues.apache.org/jira/browse/HADOOP-11030
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Juan Yu
Assignee: Juan Yu
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-11030.patch








[jira] [Commented] (HADOOP-11030) Define a variable jackson.version instead of using constant at multiple places

2014-08-29 Thread Juan Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14116024#comment-14116024
 ] 

Juan Yu commented on HADOOP-11030:
--

Thanks Karthik.

 Define a variable jackson.version instead of using constant at multiple places
 --

 Key: HADOOP-11030
 URL: https://issues.apache.org/jira/browse/HADOOP-11030
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Juan Yu
Assignee: Juan Yu
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-11030.patch








[jira] [Commented] (HADOOP-11033) shell scripts ignore JAVA_HOME on OS X

2014-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14116027#comment-14116027
 ] 

Hadoop QA commented on HADOOP-11033:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12665455/HADOOP-11033.patch
  against trunk revision 93010fa.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4602//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4602//console

This message is automatically generated.

 shell scripts ignore JAVA_HOME on OS X
 --

 Key: HADOOP-11033
 URL: https://issues.apache.org/jira/browse/HADOOP-11033
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
 Environment: Mac OSX with both JDK_1.7.0_67 and JDK_1.8.0_20 
 installed.
Reporter: Lei (Eddy) Xu
 Attachments: HADOOP-11033.patch


 Running {{start-dfs.sh}} script should pick the java specified by JAVA_HOME 
 which is defined in my {{~/.zshrc}} and {{~/.bashrc}}.  
 My JAVA_HOME is
 {noformat}
  $ echo $JAVA_HOME
 /Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home
 {noformat}
 However, when I start a local cluster using {{start-dfs.sh}}, it reports
 {noformat}
 JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home
 2014-08-29 14:48:20,767 INFO  [main] namenode.NameNode 
 (StringUtils.java:startupShutdownMessage(633)) - STARTUP_MSG:
 .
 STARTUP_MSG:   java = 1.8.0_20
 {noformat}
 It is expected to use JDK 7 instead. This bug only occurs on trunk, but not 
 branch-2.





[jira] [Commented] (HADOOP-10863) KMS should have a blacklist for decrypting EEKs

2014-08-29 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14116055#comment-14116055
 ] 

Alejandro Abdelnur commented on HADOOP-10863:
-

The patch looks good; a few minor things:

* KMS.java: introduces a few unused imports.

* KMSACLs.java: the hasAccess() method can be rewritten as:

{code}
  public boolean hasAccess(Type type, UserGroupInformation ugi) {
boolean access = acls.get(type).isUserAllowed(ugi);
if (access) {
  AccessControlList blacklist = blacklistedAcls.get(type);
  access = (blacklist == null) || !blacklist.isUserInList(ugi);
}
return access;
  }
{code}

Documentation is missing.

Regarding [~benoyantony]'s point, I think it makes sense to normalize the ACL properties to 
follow the syntax used in the rest of Hadoop. Regarding the uppercase 
concerns, properties are case sensitive, so as long as they are documented as (i.e.) CREATE, it 
should be fine. Alternatively, we can make the enum parsing case-insensitive in 
KMS.
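A minimal sketch of such case-insensitive enum parsing (the {{Type}} values and class name here are illustrative, not the actual KMS ACL types):

```java
// Sketch: case-insensitive enum parsing of an ACL operation name.
// The Type values are placeholders, not the real KMS ACL types.
public class EnumParseSketch {
  enum Type { CREATE, DELETE, ROLLOVER }

  // Upper-case the configured name before Enum.valueOf, so "create",
  // "Create" and "CREATE" all resolve to Type.CREATE.
  static Type parseType(String s) {
    return Type.valueOf(s.trim().toUpperCase(java.util.Locale.ENGLISH));
  }

  public static void main(String[] args) {
    System.out.println(parseType("create"));    // CREATE
    System.out.println(parseType(" Delete "));  // DELETE
  }
}
```

This keeps the documented property values readable while tolerating whatever casing an admin actually types.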

 KMS should have a blacklist for decrypting EEKs
 ---

 Key: HADOOP-10863
 URL: https://issues.apache.org/jira/browse/HADOOP-10863
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HADOOP-10863.1.patch, HADOOP-10863.2.patch, 
 HADOOP-10863.3.patch


 In particular, we'll need to put HDFS admin user there by default to prevent 
 an HDFS admin from getting file encryption keys.





[jira] [Commented] (HADOOP-10758) KMS: add ACLs on per key basis.

2014-08-29 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14116071#comment-14116071
 ] 

Alejandro Abdelnur commented on HADOOP-10758:
-

*KeyAuthorizationKeyProvider.java*:

* class javadoc: use HTML markup (for the list), else everything will be 
collapsed into one line.

* {{authorizeCreateKey()}} & {{checkAccess()}} should throw 
{{AuthorizationException}} (it extends {{IOException}}).

* {{warmUpEncryptedKeys()}} should do an initial loop just to check access on 
the whole array of names.

* IMO, read methods should be guarded as well; many of them return key material. 
In multi-tenancy environments this will be required.

* The constants should be in {{KMSConfiguration}}

*KMSACLs.java*:

* {{setKeyACLs()}}: we shouldn't set '*' as the ACL if an ACL for a key is not 
present, because a typo could leave a key available to everybody. Instead we 
should have KEY DEFAULTs.

* KEY DEFAULTs for each operation: we should have them as the fallback for keys 
that do not have ACLs defined. They can be set to a '*' default. At load time, if 
the value is the default '*', we should WARN in the logs that the key defaults 
are wide open.
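One way the per-key lookup with a KEY DEFAULT fallback could look (class, map, and method names are hypothetical, not the real KMSACLs code):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a per-key ACL lookup that falls back to an operation-level
// default instead of silently granting '*' on a typo'd key name.
// All names here are hypothetical, not the actual KMSACLs implementation.
public class KeyAclSketch {
  private final Map<String, String> keyAcls = new HashMap<>();
  private final Map<String, String> defaultAcls = new HashMap<>();

  KeyAclSketch() {
    // A wide-open default; real code would WARN at load time on '*'.
    defaultAcls.put("DECRYPT_EEK", "*");
  }

  void setKeyAcl(String key, String acl) {
    keyAcls.put(key, acl);
  }

  // Use the per-key ACL when defined; otherwise fall back to the
  // operation's KEY DEFAULT, warning if that default is wide open.
  String effectiveAcl(String key, String op) {
    String acl = keyAcls.get(key);
    if (acl == null) {
      acl = defaultAcls.get(op);
      if ("*".equals(acl)) {
        System.err.println("WARN: default ACL for " + op + " is wide open");
      }
    }
    return acl;
  }

  public static void main(String[] args) {
    KeyAclSketch acls = new KeyAclSketch();
    acls.setKeyAcl("key1", "alice");
    System.out.println(acls.effectiveAcl("key1", "DECRYPT_EEK")); // alice
    System.out.println(acls.effectiveAcl("key2", "DECRYPT_EEK")); // *
  }
}
```

The point of the fallback is that a misspelled key name degrades to the (logged) default rather than to an unnoticed '*' grant.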

 KMS: add ACLs on per key basis.
 ---

 Key: HADOOP-10758
 URL: https://issues.apache.org/jira/browse/HADOOP-10758
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HADOOP-10758.1.patch, HADOOP-10758.2.patch, 
 HADOOP-10758.3.patch, HADOOP-10758.4.patch


 The KMS server should enforce ACLs on per key basis.





[jira] [Created] (HADOOP-11035) distcp on mr1(branch-1) fails with NPE using a short relative source path.

2014-08-29 Thread zhihai xu (JIRA)
zhihai xu created HADOOP-11035:
--

 Summary: distcp on mr1(branch-1) fails with NPE using a short 
relative source path.
 Key: HADOOP-11035
 URL: https://issues.apache.org/jira/browse/HADOOP-11035
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Reporter: zhihai xu
Assignee: zhihai xu


distcp on mr1 (branch-1) fails with an NPE when given a short relative source path. 
The failure is in DistCp.java, where makeRelative returns null at the following code, 
because the parameters passed to makeRelative are not in the same format: 
root is a relative path while child.getPath() is a fully qualified path.
{code}
final String dst = makeRelative(root, child.getPath());
{code}

The solution is to change root to a fully qualified path so it matches child.getPath().
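A simplified illustration of the failure mode and the fix (the paths and this stripped-down helper are stand-ins, not DistCp's actual implementation):

```java
// Simplified illustration of the DistCp bug described above: a relative
// root never prefixes the child's fully qualified path, so a
// makeRelative-style helper finds no common prefix and returns null,
// and the caller later dereferences that null.
public class MakeRelativeSketch {
  // Returns child with the root prefix stripped, or null on mismatch.
  // This mirrors the failure mode, not DistCp's exact code.
  static String makeRelative(String root, String child) {
    if (!child.startsWith(root)) {
      return null;
    }
    return child.substring(root.length());
  }

  public static void main(String[] args) {
    String child = "hdfs://nn:8020/user/me/src/a.txt";
    // Relative root: no common prefix, so the helper returns null.
    System.out.println(makeRelative("src", child));          // null
    // Fix: qualify the root the same way as child.getPath().
    String qualifiedRoot = "hdfs://nn:8020/user/me/src";
    System.out.println(makeRelative(qualifiedRoot, child));  // /a.txt
  }
}
```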





[jira] [Updated] (HADOOP-11034) ViewFileSystem is missing getStatus(Path)

2014-08-29 Thread Gary Steelman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Steelman updated HADOOP-11034:
---

Attachment: HADOOP-11034-branch-2-1.patch
HADOOP-11034-trunk-1.patch

 ViewFileSystem is missing getStatus(Path)
 -

 Key: HADOOP-11034
 URL: https://issues.apache.org/jira/browse/HADOOP-11034
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Reporter: Gary Steelman
 Attachments: HADOOP-11034-branch-2-1.patch, HADOOP-11034-trunk-1.patch


 This patch implements ViewFileSystem#getStatus(Path), which is currently 
 unimplemented.
 getStatus(Path) should return the FsStatus of the FileSystem backing the 
 path. Currently it returns the same as getStatus(), which is a default 
 Long.MAX_VALUE for capacity, 0 used, and Long.MAX_VALUE for remaining space. 
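 The intended delegation can be sketched generically: resolve the path against the mount table and ask the backing filesystem, rather than returning a global default. The tiny Fs interface and prefix lookup below are stand-ins, not the actual Hadoop ViewFileSystem API or the attached patch.

```java
import java.util.Map;
import java.util.TreeMap;

// Generic sketch of mount-table delegation for a getStatus(Path)-style
// call. The Fs interface and prefix matching stand in for ViewFileSystem's
// real mount resolution; none of this is the actual Hadoop API.
public class ViewFsSketch {
  interface Fs { long remaining(); }

  // Mount table iterated in reverse lexicographic order so a longer,
  // more specific prefix is tried before a shorter one.
  private final TreeMap<String, Fs> mounts =
      new TreeMap<>(java.util.Comparator.reverseOrder());

  void mount(String prefix, Fs fs) { mounts.put(prefix, fs); }

  // Find the filesystem backing the path and delegate to it, instead of
  // answering with a view-wide default value.
  long remainingFor(String path) {
    for (Map.Entry<String, Fs> e : mounts.entrySet()) {
      if (path.startsWith(e.getKey())) {
        return e.getValue().remaining();
      }
    }
    throw new IllegalArgumentException("no mount for " + path);
  }

  public static void main(String[] args) {
    ViewFsSketch view = new ViewFsSketch();
    view.mount("/data", () -> 100L);
    view.mount("/logs", () -> 5L);
    System.out.println(view.remainingFor("/data/x")); // 100
    System.out.println(view.remainingFor("/logs/y")); // 5
  }
}
```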





[jira] [Updated] (HADOOP-10651) Add ability to restrict service access using IP addresses and hostnames

2014-08-29 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10651:
--

Attachment: HADOOP-10651.patch

 Add ability to restrict service access using IP addresses and hostnames
 ---

 Key: HADOOP-10651
 URL: https://issues.apache.org/jira/browse/HADOOP-10651
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.5.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HADOOP-10651.patch, HADOOP-10651.patch


 In some use cases, it makes sense to authorize the usage of some services only 
 from specific hosts. Just like the ACLs for Service Authorization, there can be 
 a list of hosts for each service, and this list can be checked during 
 authorization. 
 Similar to ACLs, there can be a whitelist of IPs and a blacklist of IPs. The 
 default whitelist will be * and the default blacklist will be empty. It should be 
 possible to override the default whitelist and default blacklist, and to 
 define a whitelist and blacklist per service.
 It should also be possible to define IP ranges in blacklists and whitelists.
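 The whitelist/blacklist semantics described above can be sketched as follows (the comma-list representation and method name are illustrative, not the eventual Hadoop configuration format):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the host check described above: a '*' whitelist admits every
// host, and blacklist entries then subtract hosts. The comma-separated
// list format here is illustrative, not the eventual Hadoop config keys.
public class HostAclSketch {
  static boolean allowed(String host, String whitelist, String blacklist) {
    List<String> white = Arrays.asList(whitelist.split(","));
    List<String> black = Arrays.asList(blacklist.split(","));
    boolean inWhite = "*".equals(whitelist) || white.contains(host);
    return inWhite && !black.contains(host);
  }

  public static void main(String[] args) {
    // Defaults: whitelist '*', blacklist empty.
    System.out.println(allowed("host1", "*", ""));          // true
    System.out.println(allowed("bad1", "*", "bad1,bad2"));  // false
  }
}
```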





  1   2   >