[jira] [Updated] (HDFS-4574) Move Diff and EnumCounters to util package

2013-03-08 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4574:
-

Attachment: h4574_20130308b.patch

Maven somehow cannot compile the previous patch.

h4574_20130308b.patch: changes the imports.

 Move Diff and EnumCounters to util package
 --

 Key: HDFS-4574
 URL: https://issues.apache.org/jira/browse/HDFS-4574
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Attachments: a.sh, h4574_20130308b.patch, h4574_20130308.patch


 Diff and EnumCounters are two general utility classes.  It is better to put 
 them in the util package.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4566) Webdhfs token cancelation should use authentication

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597005#comment-13597005
 ] 

Hudson commented on HDFS-4566:
--

Integrated in Hadoop-Yarn-trunk #149 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/149/])
HDFS-4566. Webdhfs token cancelation should use authentication (daryn) 
(Revision 1454059)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454059
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


 Webdhfs token cancelation should use authentication
 ---

 Key: HDFS-4566
 URL: https://issues.apache.org/jira/browse/HDFS-4566
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 0.23.7, 2.0.4-alpha

 Attachments: HDFS-4566.patch


 Webhdfs tries to use its own implicit/internal token to cancel tokens.  
 However, like getting and renewing a token, cancel should use direct 
 authentication to ensure daemons like yarn's RM don't have problems canceling 
 tokens.
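
 As a hedged caller-side sketch (not the patch itself), a daemon such as the
 YARN RM that collected a webhdfs delegation token would cancel it as below;
 per this issue, that cancellation should authenticate directly
 (Kerberos/SPNEGO) instead of riding on webhdfs's own internal token:

 {noformat}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.security.token.Token;

 // Sketch only: cancel a previously obtained webhdfs delegation token.
 // Token#cancel dispatches to the canceller registered for the token kind.
 public class CancelTokenSketch {
   static void cancel(Token<?> token, Configuration conf) throws Exception {
     token.cancel(conf);
   }
 }
 {noformat}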

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4565) use DFSUtil.getSpnegoKeytabKey() to get the spnego keytab key in secondary namenode and namenode http server

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597006#comment-13597006
 ] 

Hudson commented on HDFS-4565:
--

Integrated in Hadoop-Yarn-trunk #149 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/149/])
Change the jira number of commit 1454021 to HDFS-4565. Attribute it to the 
right contributor, Arpit Gupta (Revision 1454027)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454027
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 use DFSUtil.getSpnegoKeytabKey() to get the spnego keytab key in secondary 
 namenode and namenode http server
 

 Key: HDFS-4565
 URL: https://issues.apache.org/jira/browse/HDFS-4565
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Fix For: 2.0.5-beta

 Attachments: HDFS-4565.patch


 Use the method introduced by HDFS-4540 to get the spnego keytab key. This is better as we 
 have unit test coverage for the new method.
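
 A hedged sketch of the intended usage, assuming the HDFS-4540 helper takes the
 configuration plus a fallback key name and returns the configuration key under
 which the SPNEGO keytab should be looked up (that signature is an assumption):

 {noformat}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSUtil;

 // Sketch only: resolve which config key holds the keytab for the SPNEGO
 // filter, falling back to the regular namenode keytab key when no
 // dedicated SPNEGO keytab key is set.
 public class SpnegoKeytabKeySketch {
   static String resolveSpnegoKeytab(Configuration conf) {
     String key = DFSUtil.getSpnegoKeytabKey(conf,
         DFSConfigKeys.DFS_NAMENODE_KEYTAB_FILE_KEY);
     return conf.get(key);
   }
 }
 {noformat}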

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4560) Webhdfs cannot use tokens obtained by another user

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597009#comment-13597009
 ] 

Hudson commented on HDFS-4560:
--

Integrated in Hadoop-Yarn-trunk #149 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/149/])
HDFS-4560. Webhdfs cannot use tokens obtained by another user (daryn) 
(Revision 1453955)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1453955
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


 Webhdfs cannot use tokens obtained by another user
 --

 Key: HDFS-4560
 URL: https://issues.apache.org/jira/browse/HDFS-4560
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0, 0.23.7, 2.0.5-beta
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 0.23.7, 2.0.4-alpha

 Attachments: HDFS-4560.patch, HDFS-4560.patch


 Webhdfs passes {{UserParam}} even when a token is being used.  The NN will 
 construct the UGI from the token and error if the {{UserParam}} does not 
 match the token's.  This causes problems when a token for user A is used with 
 user B's context.  This is in contrast to hdfs which will honor the token's 
 ugi no matter the client's current context.  The passing of a token or user 
 params should be mutually exclusive.
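
 A minimal sketch of the mutual exclusion being asked for (hypothetical helper,
 not the WebHdfsFileSystem code): when a delegation token is attached, the
 user.name parameter is omitted entirely:

 {noformat}
 import java.util.ArrayList;
 import java.util.List;

 // Hypothetical sketch: build webhdfs auth query parameters so that a
 // delegation token and user.name are never sent together.
 public class AuthParamSketch {
   static List<String> authParams(String encodedToken, String userName) {
     List<String> params = new ArrayList<String>();
     if (encodedToken != null) {
       params.add("delegation=" + encodedToken); // token supplies the identity
     } else {
       params.add("user.name=" + userName);      // only when no token is used
     }
     return params;
   }
 }
 {noformat}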

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4569) Small image transfer related cleanups.

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597012#comment-13597012
 ] 

Hudson commented on HDFS-4569:
--

Integrated in Hadoop-Yarn-trunk #149 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/149/])
HDFS-4569. Small image transfer related cleanups. Contributed by Andrew 
Wang. (Revision 1454233)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454233
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Small image transfer related cleanups.
 --

 Key: HDFS-4569
 URL: https://issues.apache.org/jira/browse/HDFS-4569
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial
 Fix For: 2.0.5-beta

 Attachments: hdfs-4569-1.patch, hdfs-4569-2.patch, hdfs-4569-3.patch, 
 hdfs-4569-4.patch


 The initial patch in HDFS-1490 has a couple small errors. It missed adding 
 the new configuration key dfs.image.transfer.timeout to the 
 hdfs-default.xml, and kept an explanatory comment from an earlier version of 
 the patch that is no longer correct. Also, the default timeout of 1 minute is 
 too short and can be increased.
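
 For illustration, a hedged example of raising the timeout discussed above via
 the configuration API; the millisecond unit is an assumption:

 {noformat}
 import org.apache.hadoop.conf.Configuration;

 // Sketch: raise dfs.image.transfer.timeout above the 1 minute default
 // mentioned in this issue (value assumed to be in milliseconds).
 public class ImageTransferTimeoutExample {
   public static void main(String[] args) {
     Configuration conf = new Configuration();
     conf.setInt("dfs.image.transfer.timeout", 10 * 60 * 1000); // 10 minutes
     System.out.println(conf.getInt("dfs.image.transfer.timeout", 60 * 1000));
   }
 }
 {noformat}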

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4546) Hftp does not audit log

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597013#comment-13597013
 ] 

Hudson commented on HDFS-4546:
--

Integrated in Hadoop-Yarn-trunk #149 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/149/])
HDFS-4546. Use DFSUtil.getSpnegoKeytabKey() to get the spnego keytab key in 
secondary namenode and namenode http server. Contributed by Arpit Agarwal. 
(Revision 1454021)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454021
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java


 Hftp does not audit log
 ---

 Key: HDFS-4546
 URL: https://issues.apache.org/jira/browse/HDFS-4546
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp

 The NN servlets only log external fs operations.  External is based on 
 whether there is a {{Call}} in the {{RPC.Server}} context.  At least in the 
 case of hftp, the servlets obtain the RPC proxy and invoke methods on the 
 server-side proxy.  Since this bypasses the RPC bridge, no {{Call}} is 
 created and the fs operation is not logged.
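
 A hedged sketch of the gating described above (names simplified): the audit
 log only fires when a Call is present in the RPC server's thread-local
 context, which servlet-side proxy calls never set:

 {noformat}
 import org.apache.hadoop.ipc.Server;

 // Sketch only: approximate shape of the "external invocation" check.
 // Server.isRpcInvocation() is true only inside an RPC handler thread, so
 // hftp servlets that invoke the server-side proxy directly are skipped.
 public class AuditGateSketch {
   static void maybeAuditLog(String cmd, String src) {
     if (Server.isRpcInvocation()) {
       System.out.println("audit: cmd=" + cmd + " src=" + src);
     }
   }
 }
 {noformat}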

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4575) Use different timeout for failover due to NN GC pauses

2013-03-08 Thread James Kinley (JIRA)
James Kinley created HDFS-4575:
--

 Summary: Use different timeout for failover due to NN GC pauses
 Key: HDFS-4575
 URL: https://issues.apache.org/jira/browse/HDFS-4575
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: auto-failover, ha
Reporter: James Kinley


As mentioned for future work in the ZKFC design document 
(https://issues.apache.org/jira/secure/attachment/12521279/zkfc-design.pdf), it 
would be nice to use a different timeout for automatic failover due to lengthy 
NN JVM GC pauses.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4566) Webdhfs token cancelation should use authentication

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597082#comment-13597082
 ] 

Hudson commented on HDFS-4566:
--

Integrated in Hadoop-Hdfs-0.23-Build #547 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/547/])
svn merge -c 1454059 FIXES: HDFS-4566. Webdhfs token cancelation should use 
authentication (daryn) (Revision 1454066)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454066
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


 Webdhfs token cancelation should use authentication
 ---

 Key: HDFS-4566
 URL: https://issues.apache.org/jira/browse/HDFS-4566
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 0.23.7, 2.0.4-alpha

 Attachments: HDFS-4566.patch


 Webhdfs tries to use its own implicit/internal token to cancel tokens.  
 However, like getting and renewing a token, cancel should use direct 
 authentication to ensure daemons like yarn's RM don't have problems canceling 
 tokens.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4560) Webhdfs cannot use tokens obtained by another user

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597084#comment-13597084
 ] 

Hudson commented on HDFS-4560:
--

Integrated in Hadoop-Hdfs-0.23-Build #547 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/547/])
svn merge -c 1453955 FIXES: HDFS-4560. Webhdfs cannot use tokens obtained 
by another user (daryn) (Revision 1453957)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1453957
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


 Webhdfs cannot use tokens obtained by another user
 --

 Key: HDFS-4560
 URL: https://issues.apache.org/jira/browse/HDFS-4560
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0, 0.23.7, 2.0.5-beta
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 0.23.7, 2.0.4-alpha

 Attachments: HDFS-4560.patch, HDFS-4560.patch


 Webhdfs passes {{UserParam}} even when a token is being used.  The NN will 
 construct the UGI from the token and error if the {{UserParam}} does not 
 match the token's.  This causes problems when a token for user A is used with 
 user B's context.  This is in contrast to hdfs which will honor the token's 
 ugi no matter the client's current context.  The passing of a token or user 
 params should be mutually exclusive.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4565) use DFSUtil.getSpnegoKeytabKey() to get the spnego keytab key in secondary namenode and namenode http server

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597091#comment-13597091
 ] 

Hudson commented on HDFS-4565:
--

Integrated in Hadoop-Hdfs-trunk #1338 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1338/])
Change the jira number of commit 1454021 to HDFS-4565. Attribute it to the 
right contributor, Arpit Gupta (Revision 1454027)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454027
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 use DFSUtil.getSpnegoKeytabKey() to get the spnego keytab key in secondary 
 namenode and namenode http server
 

 Key: HDFS-4565
 URL: https://issues.apache.org/jira/browse/HDFS-4565
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Fix For: 2.0.5-beta

 Attachments: HDFS-4565.patch


 Use the method introduced by HDFS-4540 to get the spnego keytab key. This is better as we 
 have unit test coverage for the new method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4566) Webdhfs token cancelation should use authentication

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597090#comment-13597090
 ] 

Hudson commented on HDFS-4566:
--

Integrated in Hadoop-Hdfs-trunk #1338 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1338/])
HDFS-4566. Webdhfs token cancelation should use authentication (daryn) 
(Revision 1454059)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454059
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


 Webdhfs token cancelation should use authentication
 ---

 Key: HDFS-4566
 URL: https://issues.apache.org/jira/browse/HDFS-4566
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 0.23.7, 2.0.4-alpha

 Attachments: HDFS-4566.patch


 Webhdfs tries to use its own implicit/internal token to cancel tokens.  
 However, like getting and renewing a token, cancel should use direct 
 authentication to ensure daemons like yarn's RM don't have problems canceling 
 tokens.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4560) Webhdfs cannot use tokens obtained by another user

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597094#comment-13597094
 ] 

Hudson commented on HDFS-4560:
--

Integrated in Hadoop-Hdfs-trunk #1338 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1338/])
HDFS-4560. Webhdfs cannot use tokens obtained by another user (daryn) 
(Revision 1453955)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1453955
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


 Webhdfs cannot use tokens obtained by another user
 --

 Key: HDFS-4560
 URL: https://issues.apache.org/jira/browse/HDFS-4560
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0, 0.23.7, 2.0.5-beta
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 0.23.7, 2.0.4-alpha

 Attachments: HDFS-4560.patch, HDFS-4560.patch


 Webhdfs passes {{UserParam}} even when a token is being used.  The NN will 
 construct the UGI from the token and error if the {{UserParam}} does not 
 match the token's.  This causes problems when a token for user A is used with 
 user B's context.  This is in contrast to hdfs which will honor the token's 
 ugi no matter the client's current context.  The passing of a token or user 
 params should be mutually exclusive.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4569) Small image transfer related cleanups.

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597097#comment-13597097
 ] 

Hudson commented on HDFS-4569:
--

Integrated in Hadoop-Hdfs-trunk #1338 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1338/])
HDFS-4569. Small image transfer related cleanups. Contributed by Andrew 
Wang. (Revision 1454233)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454233
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Small image transfer related cleanups.
 --

 Key: HDFS-4569
 URL: https://issues.apache.org/jira/browse/HDFS-4569
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial
 Fix For: 2.0.5-beta

 Attachments: hdfs-4569-1.patch, hdfs-4569-2.patch, hdfs-4569-3.patch, 
 hdfs-4569-4.patch


 The initial patch in HDFS-1490 has a couple small errors. It missed adding 
 the new configuration key dfs.image.transfer.timeout to the 
 hdfs-default.xml, and kept an explanatory comment from an earlier version of 
 the patch that is no longer correct. Also, the default timeout of 1 minute is 
 too short and can be increased.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4546) Hftp does not audit log

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597098#comment-13597098
 ] 

Hudson commented on HDFS-4546:
--

Integrated in Hadoop-Hdfs-trunk #1338 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1338/])
HDFS-4546. Use DFSUtil.getSpnegoKeytabKey() to get the spnego keytab key in 
secondary namenode and namenode http server. Contributed by Arpit Agarwal. 
(Revision 1454021)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454021
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java


 Hftp does not audit log
 ---

 Key: HDFS-4546
 URL: https://issues.apache.org/jira/browse/HDFS-4546
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp

 The NN servlets only log external fs operations.  External is based on 
 whether there is a {{Call}} in the {{RPC.Server}} context.  At least in the 
 case of hftp, the servlets obtain the RPC proxy and invoke methods on the 
 server-side proxy.  Since this bypasses the RPC bridge, no {{Call}} is 
 created and the fs operation is not logged.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4557) Fix FSDirectory#delete when INode#cleanSubtree returns 0

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597109#comment-13597109
 ] 

Hudson commented on HDFS-4557:
--

Integrated in Hadoop-Hdfs-Snapshots-Branch-build #123 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-Snapshots-Branch-build/123/])
HDFS-4557. Fix FSDirectory#delete when INode#cleanSubtree returns 0.  
Contributed by Jing Zhao (Revision 1454138)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454138
Files : 
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/CHANGES.HDFS-2802.txt
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileUnderConstructionWithSnapshot.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeFileWithSnapshot.java


 Fix FSDirectory#delete when INode#cleanSubtree returns 0
 

 Key: HDFS-4557
 URL: https://issues.apache.org/jira/browse/HDFS-4557
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: Snapshot (HDFS-2802)

 Attachments: HDFS-4557.000.patch, HDFS-4557.001.patch, 
 HDFS-4557.002.patch, HDFS-4557.003.patch, HDFS-4557.003.patch


 Currently INode#cleanSubtree is used to delete files/directories and collect 
 corresponding blocks for future deletion. Its return value can be 0 even if 
 file/dir has been deleted because we save snapshot copies. This breaks the 
 original logic in FSDirectory#delete since FSDirectory#delete expects a 
 positive value from a successful deletion.
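
 A hypothetical illustration of the pitfall (not the HDFS code): judging
 success purely by the number of inodes removed breaks once snapshots can keep
 that count at 0 for a delete that actually succeeded:

 {noformat}
 // Hypothetical sketch: a delete whose inodes are retained for a snapshot can
 // report 0 removed nodes even though the target left the namespace.
 public class DeleteCountPitfall {
   static boolean succeededBuggy(long removedCount) {
     return removedCount > 0; // wrongly reports failure with snapshots
   }

   static boolean succeeded(boolean targetRemovedFromNamespace, long removedCount) {
     return targetRemovedFromNamespace || removedCount > 0;
   }
 }
 {noformat}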

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4566) Webdhfs token cancelation should use authentication

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597139#comment-13597139
 ] 

Hudson commented on HDFS-4566:
--

Integrated in Hadoop-Mapreduce-trunk #1366 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1366/])
HDFS-4566. Webdhfs token cancelation should use authentication (daryn) 
(Revision 1454059)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454059
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


 Webdhfs token cancelation should use authentication
 ---

 Key: HDFS-4566
 URL: https://issues.apache.org/jira/browse/HDFS-4566
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 0.23.7, 2.0.4-alpha

 Attachments: HDFS-4566.patch


 Webhdfs tries to use its own implicit/internal token to cancel tokens.  
 However, like getting and renewing a token, cancel should use direct 
 authentication to ensure daemons like yarn's RM don't have problems canceling 
 tokens.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4565) use DFSUtil.getSpnegoKeytabKey() to get the spnego keytab key in secondary namenode and namenode http server

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597140#comment-13597140
 ] 

Hudson commented on HDFS-4565:
--

Integrated in Hadoop-Mapreduce-trunk #1366 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1366/])
Change the jira number of commit 1454021 to HDFS-4565. Attribute it to the 
right contributor, Arpit Gupta (Revision 1454027)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454027
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 use DFSUtil.getSpnegoKeytabKey() to get the spnego keytab key in secondary 
 namenode and namenode http server
 

 Key: HDFS-4565
 URL: https://issues.apache.org/jira/browse/HDFS-4565
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Arpit Gupta
Assignee: Arpit Gupta
Priority: Minor
 Fix For: 2.0.5-beta

 Attachments: HDFS-4565.patch


 Use the method introduced by HDFS-4540 to get the spnego keytab key. This is better as we 
 have unit test coverage for the new method.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4560) Webhdfs cannot use tokens obtained by another user

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597143#comment-13597143
 ] 

Hudson commented on HDFS-4560:
--

Integrated in Hadoop-Mapreduce-trunk #1366 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1366/])
HDFS-4560. Webhdfs cannot use tokens obtained by another user (daryn) 
(Revision 1453955)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1453955
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


 Webhdfs cannot use tokens obtained by another user
 --

 Key: HDFS-4560
 URL: https://issues.apache.org/jira/browse/HDFS-4560
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0, 0.23.7, 2.0.5-beta
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 0.23.7, 2.0.4-alpha

 Attachments: HDFS-4560.patch, HDFS-4560.patch


 Webhdfs passes {{UserParam}} even when a token is being used.  The NN will 
 construct the UGI from the token and error if the {{UserParam}} does not 
 match the token's.  This causes problems when a token for user A is used with 
 user B's context.  This is in contrast to hdfs which will honor the token's 
 ugi no matter the client's current context.  The passing of a token or user 
 params should be mutually exclusive.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4569) Small image transfer related cleanups.

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597146#comment-13597146
 ] 

Hudson commented on HDFS-4569:
--

Integrated in Hadoop-Mapreduce-trunk #1366 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1366/])
HDFS-4569. Small image transfer related cleanups. Contributed by Andrew 
Wang. (Revision 1454233)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454233
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Small image transfer related cleanups.
 --

 Key: HDFS-4569
 URL: https://issues.apache.org/jira/browse/HDFS-4569
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial
 Fix For: 2.0.5-beta

 Attachments: hdfs-4569-1.patch, hdfs-4569-2.patch, hdfs-4569-3.patch, 
 hdfs-4569-4.patch


 The initial patch in HDFS-1490 has a couple small errors. It missed adding 
 the new configuration key dfs.image.transfer.timeout to the 
 hdfs-default.xml, and kept an explanatory comment from an earlier version of 
 the patch that is no longer correct. Also, the default timeout of 1 minute is 
 too short and can be increased.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4546) Hftp does not audit log

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597147#comment-13597147
 ] 

Hudson commented on HDFS-4546:
--

Integrated in Hadoop-Mapreduce-trunk #1366 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1366/])
HDFS-4546. Use DFSUtil.getSpnegoKeytabKey() to get the spnego keytab key in 
secondary namenode and namenode http server. Contributed by Arpit Agarwal. 
(Revision 1454021)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454021
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeHttpServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java


 Hftp does not audit log
 ---

 Key: HDFS-4546
 URL: https://issues.apache.org/jira/browse/HDFS-4546
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp

 The NN servlets only log external fs operations.  External is based on 
 whether there is a {{Call}} in the {{RPC.Server}} context.  At least in the 
 case of hftp, the servlets obtain the RPC proxy and invoke methods on the 
 server-side proxy.  Since this bypasses the RPC bridge, no {{Call}} is 
 created and the fs operation is not logged.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4571) WebHDFS should not set the service hostname on the server side

2013-03-08 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597163#comment-13597163
 ] 

Daryn Sharp commented on HDFS-4571:
---

+1 Looks good to me!

 WebHDFS should not set the service hostname on the server side
 --

 Key: HDFS-4571
 URL: https://issues.apache.org/jira/browse/HDFS-4571
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HDFS-4571.patch, HDFS-4571.patch


 Per discussion in HDFS-4457, the server side should never set the service of 
 a token.
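
 A hedged client-side sketch of the counterpart: the caller, not the server,
 stamps the token's service from the address it actually connected to:

 {noformat}
 import java.net.InetSocketAddress;
 import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.security.token.Token;

 // Sketch only: the client sets the token service from the address it used,
 // which is why the server side should leave the service unset.
 public class ClientSetsServiceSketch {
   static void stampService(Token<?> token, String host, int port) {
     SecurityUtil.setTokenService(token, new InetSocketAddress(host, port));
   }
 }
 {noformat}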

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4567) Webhdfs does not need a token for token operations

2013-03-08 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-4567:
--

Attachment: HDFS-4567.patch
HDFS-4567.branch-23.patch

Oops, same patches with just the missing timeouts added.

 Webhdfs does not need a token for token operations
 --

 Key: HDFS-4567
 URL: https://issues.apache.org/jira/browse/HDFS-4567
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-4567.branch-23.patch, HDFS-4567.branch-23.patch, 
 HDFS-4567.patch, HDFS-4567.patch


 Webhdfs will implicitly acquire a token to get/renew/cancel other tokens even 
 though it neither needs nor uses the token.  The implicit token triggers a 
 renewer thread for the fs which is undesirable for daemons such as oozie and 
 yarn's RM.
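
 A hedged usage illustration of the scenario (namenode address and renewer name
 are placeholders): a daemon only wants a delegation token for a job, and with
 this change that call should no longer drag in an implicit token plus renewer
 thread for the file system instance itself:

 {noformat}
 import java.net.URI;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.security.token.Token;

 // Sketch only: obtain a webhdfs delegation token on behalf of a job.
 public class GetTokenOnlySketch {
   public static void main(String[] args) throws Exception {
     Configuration conf = new Configuration();
     FileSystem fs = FileSystem.get(new URI("webhdfs://nn.example.com:50070"), conf);
     Token<?> token = fs.getDelegationToken("rmUser");
     System.out.println("token kind: " + token.getKind());
   }
 }
 {noformat}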

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4576) Webhdfs authentication issues

2013-03-08 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-4576:
-

 Summary: Webhdfs authentication issues
 Key: HDFS-4576
 URL: https://issues.apache.org/jira/browse/HDFS-4576
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp


Umbrella jira to track the webhdfs authentication issues as subtasks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4567) Webhdfs does not need a token for token operations

2013-03-08 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-4567:
--

Issue Type: Sub-task  (was: Bug)
Parent: HDFS-4576

 Webhdfs does not need a token for token operations
 --

 Key: HDFS-4567
 URL: https://issues.apache.org/jira/browse/HDFS-4567
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-4567.branch-23.patch, HDFS-4567.branch-23.patch, 
 HDFS-4567.patch, HDFS-4567.patch


 Webhdfs will implicitly acquire a token to get/renew/cancel other tokens even 
 though it neither needs nor uses the token.  The implicit token triggers a 
 renewer thread for the fs which is undesirable for daemons such as oozie and 
 yarn's RM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4548) Webhdfs doesn't renegotiate SPNEGO token

2013-03-08 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-4548:
--

Issue Type: Sub-task  (was: Bug)
Parent: HDFS-4576

 Webhdfs doesn't renegotiate SPNEGO token
 

 Key: HDFS-4548
 URL: https://issues.apache.org/jira/browse/HDFS-4548
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-4548.branch-23.patch, HDFS-4548.branch-23.patch, 
 HDFS-4548.patch, HDFS-4548.patch


 When the webhdfs SPNEGO token expires, the fs doesn't attempt to renegotiate 
 a new SPNEGO token.  This renders webhdfs unusable for daemons that are 
 logged in via a keytab which would allow a new SPNEGO token to be generated.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4542) Webhdfs doesn't support secure proxy users

2013-03-08 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-4542:
--

Issue Type: Sub-task  (was: Bug)
Parent: HDFS-4576

 Webhdfs doesn't support secure proxy users
 --

 Key: HDFS-4542
 URL: https://issues.apache.org/jira/browse/HDFS-4542
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Fix For: 3.0.0, 0.23.7, 2.0.5-beta

 Attachments: HDFS-4542.branch-23.patch, HDFS-4542.patch, 
 HDFS-4542.patch


 Webhdfs doesn't ever send the {{DoAsParam}} in the REST calls for proxy 
 users.  Proxy users on a non-secure cluster work because the server sees 
 them as the effective user, not a proxy user, which effectively bypasses the 
 proxy authorization checks.  On secure clusters, it doesn't work at all in 
 part due to the wrong ugi being used for the connection (HDFS-3367), but then it 
 fails because the effective user tries to use a non-proxy token for the real 
 user.
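
 A hedged sketch of the proxy-user setup this is about (user names are
 placeholders): the Kerberos-authenticated real user impersonates the effective
 user, and on a secure cluster the REST calls made inside the doAs block need to
 carry the effective user as a doas parameter while the connection authenticates
 as the real user:

 {noformat}
 import java.security.PrivilegedExceptionAction;
 import org.apache.hadoop.security.UserGroupInformation;

 // Sketch only: a service impersonating user "joe"; calls inside doAs
 // should send doas=joe while the connection authenticates as the real user.
 public class ProxyUserSketch {
   public static void main(String[] args) throws Exception {
     UserGroupInformation realUser = UserGroupInformation.getLoginUser();
     UserGroupInformation proxyUgi =
         UserGroupInformation.createProxyUser("joe", realUser);
     proxyUgi.doAs(new PrivilegedExceptionAction<Void>() {
       public Void run() throws Exception {
         // webhdfs/filesystem calls on behalf of "joe" go here
         return null;
       }
     });
   }
 }
 {noformat}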

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4560) Webhdfs cannot use tokens obtained by another user

2013-03-08 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-4560:
--

Issue Type: Sub-task  (was: Bug)
Parent: HDFS-4576

 Webhdfs cannot use tokens obtained by another user
 --

 Key: HDFS-4560
 URL: https://issues.apache.org/jira/browse/HDFS-4560
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 3.0.0, 0.23.7, 2.0.5-beta
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 0.23.7, 2.0.4-alpha

 Attachments: HDFS-4560.patch, HDFS-4560.patch


 Webhdfs passes {{UserParam}} even when a token is being used.  The NN will 
 construct the UGI from the token and error if the {{UserParam}} does not 
 match the token's.  This causes problems when a token for user A is used with 
 user B's context.  This is in contrast to hdfs which will honor the token's 
 ugi no matter the client's current context.  The passing of a token or user 
 params should be mutually exclusive.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3367) WebHDFS doesn't use the logged in user when opening connections

2013-03-08 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-3367:
--

Issue Type: Sub-task  (was: Bug)
Parent: HDFS-4576

 WebHDFS doesn't use the logged in user when opening connections
 ---

 Key: HDFS-3367
 URL: https://issues.apache.org/jira/browse/HDFS-3367
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 0.23.0, 1.0.2, 2.0.0-alpha, 3.0.0
Reporter: Jakob Homan
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-3367.branch-23.patch, HDFS-3367.patch, 
 HDFS-3367.patch


 Something along the lines of
 {noformat}
 UserGroupInformation.loginUserFromKeytab(blah blah)
 FileSystem fs = FileSystem.get(new URI("webhdfs://blah"), conf)
 {noformat}
 doesn't work as webhdfs doesn't use the correct context and the user shows up 
 to the spnego filter without kerberos credentials:
 {noformat}Exception in thread "main" java.io.IOException: Authentication 
 failed, 
 url=http://NN:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&user.name=USER
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHttpUrlConnection(WebHdfsFileSystem.java:337)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.httpConnect(WebHdfsFileSystem.java:347)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:403)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:675)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.initDelegationToken(WebHdfsFileSystem.java:176)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.initialize(WebHdfsFileSystem.java:160)
   at 
 org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
 ...
 Caused by: 
 org.apache.hadoop.security.authentication.client.AuthenticationException: 
 GSSException: No valid credentials provided (Mechanism level: Failed to find 
 any Kerberos tgt)
   at 
 org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:232)
   at 
 org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:141)
   at 
 org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHttpUrlConnection(WebHdfsFileSystem.java:332)
   ... 16 more
 Caused by: GSSException: No valid credentials provided (Mechanism level: 
 Failed to find any Kerberos tgt)
   at 
 sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:130)
 ...{noformat}
 Explicitly getting the current user's context via a doAs block works, but 
 this should be done by webhdfs. 
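
 A hedged sketch of the doAs workaround mentioned above (principal, keytab path,
 and namenode address are placeholders): create the FileSystem inside the
 keytab-login user's context so the SPNEGO filter sees Kerberos credentials:

 {noformat}
 import java.net.URI;
 import java.security.PrivilegedExceptionAction;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.security.UserGroupInformation;

 // Sketch only: workaround until webhdfs uses the logged-in user's context.
 public class WebHdfsDoAsWorkaround {
   public static void main(String[] args) throws Exception {
     final Configuration conf = new Configuration();
     UserGroupInformation ugi = UserGroupInformation
         .loginUserFromKeytabAndReturnUGI("user@EXAMPLE.COM", "/path/to/user.keytab");
     FileSystem fs = ugi.doAs(new PrivilegedExceptionAction<FileSystem>() {
       public FileSystem run() throws Exception {
         return FileSystem.get(new URI("webhdfs://nn.example.com:50070"), conf);
       }
     });
     System.out.println(fs.getUri());
   }
 }
 {noformat}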

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4566) Webdhfs token cancelation should use authentication

2013-03-08 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-4566:
--

Issue Type: Sub-task  (was: Bug)
Parent: HDFS-4576

 Webdhfs token cancelation should use authentication
 ---

 Key: HDFS-4566
 URL: https://issues.apache.org/jira/browse/HDFS-4566
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 0.23.7, 2.0.4-alpha

 Attachments: HDFS-4566.patch


 Webhdfs tries to use its own implicit/internal token to cancel tokens.  
 However, like getting and renewing a token, cancel should use direct 
 authentication to ensure daemons like yarn's RM don't have problems canceling 
 tokens.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4564) Webhdfs returns incorrect http response codes for denied operations

2013-03-08 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-4564:
--

Issue Type: Sub-task  (was: Bug)
Parent: HDFS-4576

 Webhdfs returns incorrect http response codes for denied operations
 ---

 Key: HDFS-4564
 URL: https://issues.apache.org/jira/browse/HDFS-4564
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp

 Webhdfs is returning 401 (Unauthorized) instead of 403 (Forbidden) when it's 
 denying operations.  Examples include rejecting invalid proxy user attempts 
 and renew/cancel with an invalid user.
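
 A minimal sketch of the distinction being requested, using the standard
 servlet status codes: unauthenticated requests get 401, while authenticated
 but denied requests get 403:

 {noformat}
 import javax.servlet.http.HttpServletResponse;

 // Sketch only: pick the HTTP status for a denied webhdfs request.
 public class DeniedStatusSketch {
   static int statusFor(boolean authenticated, boolean authorized) {
     if (!authenticated) {
       return HttpServletResponse.SC_UNAUTHORIZED; // 401: prove who you are
     }
     if (!authorized) {
       return HttpServletResponse.SC_FORBIDDEN;    // 403: known user, denied
     }
     return HttpServletResponse.SC_OK;
   }
 }
 {noformat}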

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4577) Webhdfs operations should declare if authentication is required

2013-03-08 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-4577:
-

 Summary: Webhdfs operations should declare if authentication is 
required
 Key: HDFS-4577
 URL: https://issues.apache.org/jira/browse/HDFS-4577
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp


Multiple hardcoded conditionals for operations can be avoided in webhdfs if the 
methods declare whether authentication (i.e. a token cannot be used) is required.
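
A hypothetical sketch of what such a declaration could look like (names invented
for illustration): each operation carries a flag instead of scattered conditionals.

{noformat}
// Hypothetical sketch: per-operation flag saying whether direct
// authentication is required (i.e. a delegation token cannot be used).
public enum OpSketch {
  GETDELEGATIONTOKEN(true),
  RENEWDELEGATIONTOKEN(true),
  CANCELDELEGATIONTOKEN(true),
  OPEN(false),
  GETFILESTATUS(false);

  private final boolean requireAuth;

  OpSketch(boolean requireAuth) { this.requireAuth = requireAuth; }

  public boolean requiresAuthentication() { return requireAuth; }
}
{noformat}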

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4567) Webhdfs does not need a token for token operations

2013-03-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597240#comment-13597240
 ] 

Hadoop QA commented on HDFS-4567:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12572769/HDFS-4567.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 tests included appear to have a timeout.{color}

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4059//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4059//console

This message is automatically generated.

 Webhdfs does not need a token for token operations
 --

 Key: HDFS-4567
 URL: https://issues.apache.org/jira/browse/HDFS-4567
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-4567.branch-23.patch, HDFS-4567.branch-23.patch, 
 HDFS-4567.patch, HDFS-4567.patch


 Webhdfs will implicitly acquire a token to get/renew/cancel other tokens even 
 though it neither needs nor uses the token.  The implicit token triggers a 
 renewer thread for the fs which is undesirable for daemons such as oozie and 
 yarn's RM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4567) Webhdfs does not need a token for token operations

2013-03-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597279#comment-13597279
 ] 

Kihwal Lee commented on HDFS-4567:
--

The patch looks good. The change prevents unconditional pre-acquisition of 
tokens, but does acquire a token for non-token-related operations. 

There is a slight change in behavior. If the file system instance is created but 
no operation is done for long enough that the TGT in the ticket cache expires, it 
won't be usable anymore. Previously, the pre-acquired token could outlast the TGT 
and allow the file system to still be used. But this change in behavior only 
matters if a ticket cache is used (not a keytab) and a WebHdfsFileSystem instance 
is created but left unused for a long time. Since services typically use keytabs, 
this will mostly be confined to interactive user sessions and scripts, where users 
need to make sure a valid TGT is present and the file system is used right 
away. Overall, there should be no visible negative effect.

+1 

 Webhdfs does not need a token for token operations
 --

 Key: HDFS-4567
 URL: https://issues.apache.org/jira/browse/HDFS-4567
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-4567.branch-23.patch, HDFS-4567.branch-23.patch, 
 HDFS-4567.patch, HDFS-4567.patch


 Webhdfs will implicitly acquire a token to get/renew/cancel other tokens even 
 though it neither needs nor uses the token.  The implicit token triggers a 
 renewer thread for the fs which is undesirable for daemons such as oozie and 
 yarn's RM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4567) Webhdfs does not need a token for token operations

2013-03-08 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-4567:
-

   Resolution: Fixed
Fix Version/s: 2.0.4-alpha
   0.23.7
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this to trunk, branch-2 and branch-0.23. 

 Webhdfs does not need a token for token operations
 --

 Key: HDFS-4567
 URL: https://issues.apache.org/jira/browse/HDFS-4567
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 0.23.7, 2.0.4-alpha

 Attachments: HDFS-4567.branch-23.patch, HDFS-4567.branch-23.patch, 
 HDFS-4567.patch, HDFS-4567.patch


 Webhdfs will implicitly acquire a token to get/renew/cancel other tokens even 
 though it neither needs nor uses the token.  The implicit token triggers a 
 renewer thread for the fs which is undesirable for daemons such as oozie and 
 yarn's RM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4567) Webhdfs does not need a token for token operations

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597289#comment-13597289
 ] 

Hudson commented on HDFS-4567:
--

Integrated in Hadoop-trunk-Commit #3441 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3441/])
HDFS-4567. Webhdfs does not need a token for token operations. Contributed 
by Daryn Sharp. (Revision 1454460)

 Result = SUCCESS
kihwal : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454460
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


 Webhdfs does not need a token for token operations
 --

 Key: HDFS-4567
 URL: https://issues.apache.org/jira/browse/HDFS-4567
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-4567.branch-23.patch, HDFS-4567.branch-23.patch, 
 HDFS-4567.patch, HDFS-4567.patch


 Webhdfs will implicitly acquire a token to get/renew/cancel other tokens even 
 though it neither needs nor uses the token.  The implicit token triggers a 
 renewer thread for the fs which is undesirable for daemons such as oozie and 
 yarn's RM.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4571) WebHDFS should not set the service hostname on the server side

2013-03-08 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HDFS-4571:
-

   Resolution: Fixed
Fix Version/s: 2.0.4-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.

 WebHDFS should not set the service hostname on the server side
 --

 Key: HDFS-4571
 URL: https://issues.apache.org/jira/browse/HDFS-4571
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.4-alpha

 Attachments: HDFS-4571.patch, HDFS-4571.patch


 Per discussion in HDFS-4457, the server side should never set the service of 
 a token.
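
A hedged sketch of the client-side convention implied here (editor's illustration; the address is a placeholder):
{noformat}
// The client, not the server, stamps the service field on a token it has
// fetched over WebHDFS. Address below is a placeholder.
import java.net.InetSocketAddress;
import org.apache.hadoop.net.NetUtils;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.token.Token;

public class StampTokenService {
  static void stamp(Token<?> token) {
    InetSocketAddress nnAddr = NetUtils.createSocketAddr("nn.example.com:50070");
    SecurityUtil.setTokenService(token, nnAddr);  // resolved on the client side
  }
}
{noformat}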

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4571) WebHDFS should not set the service hostname on the server side

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597314#comment-13597314
 ] 

Hudson commented on HDFS-4571:
--

Integrated in Hadoop-trunk-Commit #3442 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3442/])
HDFS-4571. WebHDFS should not set the service hostname on the server side. 
(tucu) (Revision 1454475)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454475
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java


 WebHDFS should not set the service hostname on the server side
 --

 Key: HDFS-4571
 URL: https://issues.apache.org/jira/browse/HDFS-4571
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.4-alpha

 Attachments: HDFS-4571.patch, HDFS-4571.patch


 Per discussion in HDFS-4457, the server side should never set the service of 
 a token.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4572) Fix TestJournal failures on Windows

2013-03-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4572:


Assignee: Arpit Agarwal

 Fix TestJournal failures on Windows
 ---

 Key: HDFS-4572
 URL: https://issues.apache.org/jira/browse/HDFS-4572
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4572.patch


 Multiple test failures in TestJournal. Windows is stricter about restricting 
 access to in-use files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4577) Webhdfs operations should declare if authentication is required

2013-03-08 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-4577:
--

Attachment: HDFS-4577.patch
HDFS-4577.branch-23.patch

Straightforward change that will also simplify the jira for SPNEGO re-auth.

 Webhdfs operations should declare if authentication is required
 ---

 Key: HDFS-4577
 URL: https://issues.apache.org/jira/browse/HDFS-4577
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-4577.branch-23.patch, HDFS-4577.patch


 Multiple hardcoded conditionals for operations can be avoided in webhdfs if 
 the methods declare if authentication (ie. cannot use a token) is required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4577) Webhdfs operations should declare if authentication is required

2013-03-08 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-4577:
--

Status: Patch Available  (was: Open)

 Webhdfs operations should declare if authentication is required
 ---

 Key: HDFS-4577
 URL: https://issues.apache.org/jira/browse/HDFS-4577
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-4577.branch-23.patch, HDFS-4577.patch


 Multiple hardcoded conditionals for operations can be avoided in webhdfs if 
 the methods declare if authentication (ie. cannot use a token) is required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4577) Webhdfs operations should declare if authentication is required

2013-03-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597343#comment-13597343
 ] 

Kihwal Lee commented on HDFS-4577:
--

The patch looks straightforward. Individual ops previously had no knowledge of their auth 
requirements, but now they do. This doesn't seem to decrease flexibility in any 
way, so I think moving this check to the op level is okay.

+1 pending successful precommit.
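
A minimal sketch of the op-level declaration under review (editor's illustration; the accessor name and enum constants are assumptions, not necessarily what the patch uses):
{noformat}
// Each HTTP op constant declares whether it must authenticate directly
// (i.e. cannot ride on a delegation token), so the filesystem needs only
// one generic check instead of per-operation conditionals.
public class OpAuthSketch {
  interface Op {
    boolean getRequireAuth();   // assumed accessor name
  }

  enum GetOp implements Op {
    OPEN(false), GETFILESTATUS(false), GETDELEGATIONTOKEN(true);

    private final boolean requireAuth;
    GetOp(boolean requireAuth) { this.requireAuth = requireAuth; }
    @Override public boolean getRequireAuth() { return requireAuth; }
  }

  static boolean useDelegationToken(Op op, boolean securityEnabled) {
    return securityEnabled && !op.getRequireAuth();
  }
}
{noformat}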

 Webhdfs operations should declare if authentication is required
 ---

 Key: HDFS-4577
 URL: https://issues.apache.org/jira/browse/HDFS-4577
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-4577.branch-23.patch, HDFS-4577.patch


 Multiple hardcoded conditionals for operations can be avoided in webhdfs if 
 the methods declare if authentication (ie. cannot use a token) is required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4572) Fix TestJournal failures on Windows

2013-03-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4572:


Attachment: HDFS-4572.patch

Thanks for the feedback, Chris! Your second comment is the same as the findbugs 
warning, and I have fixed it.

I did not spend much time looking into the file share APIs. Changing the file 
locking behavior may be risky and would require significant testing. It would 
be a lot of effort for the minimal diagnosability benefit and only on Windows. 
Would it be acceptable to file a separate Jira to look into that?

 Fix TestJournal failures on Windows
 ---

 Key: HDFS-4572
 URL: https://issues.apache.org/jira/browse/HDFS-4572
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4572.patch, HDFS-4572.patch


 Multiple test failures in TestJournal. Windows is stricter about restricting 
 access to in-use files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4573) Fix TestINodeFile on Windows

2013-03-08 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597348#comment-13597348
 ] 

Arpit Agarwal commented on HDFS-4573:
-

Thanks for reviewing and testing!

 Fix TestINodeFile on Windows
 

 Key: HDFS-4573
 URL: https://issues.apache.org/jira/browse/HDFS-4573
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4573.patch


 TestINodeFile fails on Windows since individual test cases fail to shutdown 
 the MiniDFS cluster.
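
For context, the usual shape of the fix for this class of failure (editor's sketch, not the attached patch):
{noformat}
// Shut the MiniDFSCluster down in a teardown so Windows can release the
// in-use files between test cases. Illustrative test skeleton only.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.After;
import org.junit.Test;

public class TestWithClusterTeardown {
  private MiniDFSCluster cluster;

  @Test
  public void testSomething() throws Exception {
    cluster = new MiniDFSCluster.Builder(new Configuration()).build();
    // ... exercise the cluster ...
  }

  @After
  public void shutdownCluster() {
    if (cluster != null) {
      cluster.shutdown();
      cluster = null;
    }
  }
}
{noformat}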

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4572) Fix TestJournal failures on Windows

2013-03-08 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597350#comment-13597350
 ] 

Chris Nauroth commented on HDFS-4572:
-

+1 for the current patch.

Thanks for addressing the findbugs warning, and the explanation makes sense.


 Fix TestJournal failures on Windows
 ---

 Key: HDFS-4572
 URL: https://issues.apache.org/jira/browse/HDFS-4572
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4572.patch, HDFS-4572.patch


 Multiple test failures in TestJournal. Windows is stricter about restricting 
 access to in-use files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4577) Webhdfs operations should declare if authentication is required

2013-03-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597419#comment-13597419
 ] 

Hadoop QA commented on HDFS-4577:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12572787/HDFS-4577.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 tests included appear to have a timeout.{color}

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4060//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4060//console

This message is automatically generated.

 Webhdfs operations should declare if authentication is required
 ---

 Key: HDFS-4577
 URL: https://issues.apache.org/jira/browse/HDFS-4577
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-4577.branch-23.patch, HDFS-4577.patch


 Multiple hardcoded conditionals for operations can be avoided in webhdfs if 
 the methods declare if authentication (ie. cannot use a token) is required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4578) Restrict snapshot IDs to 16-bits wide

2013-03-08 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-4578:
---

 Summary: Restrict snapshot IDs to 16-bits wide
 Key: HDFS-4578
 URL: https://issues.apache.org/jira/browse/HDFS-4578
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: Snapshot (HDFS-2802)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: Snapshot (HDFS-2802)


Snapshot IDs will be restricted to 16 bits. This will allow at most 64K 
snapshots of a given directory.
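
For concreteness (editor's arithmetic and sketch, not code from the branch): 16 bits give 2^16 = 65,536 distinct ids, hence the 64K limit.
{noformat}
// Illustrative bound check for a 16-bit snapshot id space.
public class SnapshotIdLimit {
  static final int SNAPSHOT_ID_BIT_WIDTH = 16;
  static final int MAX_SNAPSHOT_ID = (1 << SNAPSHOT_ID_BIT_WIDTH) - 1;  // 65535

  static int checkSnapshotId(int id) {
    if (id < 0 || id > MAX_SNAPSHOT_ID) {
      throw new IllegalArgumentException("snapshot id out of 16-bit range: " + id);
    }
    return id;
  }
}
{noformat}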

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4579) Annotate snapshot tests

2013-03-08 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-4579:
---

 Summary: Annotate snapshot tests
 Key: HDFS-4579
 URL: https://issues.apache.org/jira/browse/HDFS-4579
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: Snapshot (HDFS-2802)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: Snapshot (HDFS-2802)


Add annotations to snapshot tests, required to merge into trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4572) Fix TestJournal failures on Windows

2013-03-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597446#comment-13597446
 ] 

Hadoop QA commented on HDFS-4572:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12572789/HDFS-4572.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 tests included appear to have a timeout.{color}

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4061//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4061//console

This message is automatically generated.

 Fix TestJournal failures on Windows
 ---

 Key: HDFS-4572
 URL: https://issues.apache.org/jira/browse/HDFS-4572
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4572.patch, HDFS-4572.patch


 Multiple test failures in TestJournal. Windows is stricter about restricting 
 access to in-use files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4579) Annotate snapshot tests

2013-03-08 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4579:


Issue Type: Sub-task  (was: Bug)
Parent: HDFS-2802

 Annotate snapshot tests
 ---

 Key: HDFS-4579
 URL: https://issues.apache.org/jira/browse/HDFS-4579
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: Snapshot (HDFS-2802)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: Snapshot (HDFS-2802)


 Add annotations to snapshot tests, required to merge into trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-2576) Namenode should have a favored nodes hint to enable clients to have control over block placement.

2013-03-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597468#comment-13597468
 ] 

Suresh Srinivas commented on HDFS-2576:
---

A lot of discussion to catch up with. I will post a comment later when I get 
some time. Devaraj, should DistributedFileSystem include a new version of the 
create method for HBase to use?

 Namenode should have a favored nodes hint to enable clients to have control 
 over block placement.
 -

 Key: HDFS-2576
 URL: https://issues.apache.org/jira/browse/HDFS-2576
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Pritam Damania
 Attachments: hdfs-2576-1.txt, hdfs-2576-trunk-1.patch


 Sometimes Clients like HBase are required to dynamically compute the 
 datanodes it wishes to place the blocks for a file for higher level of 
 locality. For this purpose there is a need of a way to give the Namenode a 
 hint in terms of a favoredNodes parameter about the locations where the 
 client wants to put each block. The proposed solution is a favored nodes 
 parameter in the addBlock() method and in the create() file method to enable 
 the clients to give the hints to the NameNode about the locations of each 
 replica of the block. Note that this would be just a hint and finally the 
 NameNode would look at disk usage, datanode load etc. and decide whether it 
 can respect the hints or not.
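
A hedged sketch of the shape of the hint from the client's point of view (editor's illustration; the exact create()/addBlock() signatures are whatever the attached patches define, and hostnames/ports are placeholders):
{noformat}
import java.io.IOException;
import java.net.InetSocketAddress;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;

public class FavoredNodesSketch {
  /** Hypothetical client-facing overload carrying the favored-nodes hint. */
  interface FavoredNodesCreator {
    FSDataOutputStream create(Path file, boolean overwrite, int bufferSize,
        InetSocketAddress[] favoredNodes) throws IOException;
  }

  static void writeRegionFile(FavoredNodesCreator fs) throws IOException {
    InetSocketAddress[] favored = {            // where the client wants replicas
        new InetSocketAddress("dn1.example.com", 50010),
        new InetSocketAddress("dn2.example.com", 50010),
        new InetSocketAddress("dn3.example.com", 50010) };
    // Best-effort hint: the NameNode may ignore it based on disk usage,
    // datanode load, etc.
    FSDataOutputStream out =
        fs.create(new Path("/hbase/region/file"), true, 4096, favored);
    out.close();
  }
}
{noformat}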

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HDFS-2576) Namenode should have a favored nodes hint to enable clients to have control over block placement.

2013-03-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597468#comment-13597468
 ] 

Suresh Srinivas edited comment on HDFS-2576 at 3/8/13 7:49 PM:
---

A lot of discussion to catch up with. I will post a comment later when I get 
some time. Devaraj, should DistributedFileSystem include a new version of the 
create method for HBase to use, in the trunk version of the patch?

  was (Author: sureshms):
A lot of discussion to catchup with. I will post a comment later when I get 
some time. Devaraj, should DistributedFileSystem include a new version of the 
create method, for HBase to use?
  
 Namenode should have a favored nodes hint to enable clients to have control 
 over block placement.
 -

 Key: HDFS-2576
 URL: https://issues.apache.org/jira/browse/HDFS-2576
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Pritam Damania
 Attachments: hdfs-2576-1.txt, hdfs-2576-trunk-1.patch


 Sometimes Clients like HBase are required to dynamically compute the 
 datanodes it wishes to place the blocks for a file for higher level of 
 locality. For this purpose there is a need of a way to give the Namenode a 
 hint in terms of a favoredNodes parameter about the locations where the 
 client wants to put each block. The proposed solution is a favored nodes 
 parameter in the addBlock() method and in the create() file method to enable 
 the clients to give the hints to the NameNode about the locations of each 
 replica of the block. Note that this would be just a hint and finally the 
 NameNode would look at disk usage, datanode load etc. and decide whether it 
 can respect the hints or not.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4578) Restrict snapshot IDs to 16-bits wide

2013-03-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4578:


Issue Type: Task  (was: Bug)

 Restrict snapshot IDs to 16-bits wide
 -

 Key: HDFS-4578
 URL: https://issues.apache.org/jira/browse/HDFS-4578
 Project: Hadoop HDFS
  Issue Type: Task
  Components: namenode
Affects Versions: Snapshot (HDFS-2802)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: Snapshot (HDFS-2802)


 Snapshot IDs will be restricted to 16 bits. This will allow at most 64K 
 snapshots of a given directory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4578) Restrict snapshot IDs to 16-bits wide

2013-03-08 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4578:


Issue Type: Sub-task  (was: Task)
Parent: HDFS-2802

 Restrict snapshot IDs to 16-bits wide
 -

 Key: HDFS-4578
 URL: https://issues.apache.org/jira/browse/HDFS-4578
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: Snapshot (HDFS-2802)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: Snapshot (HDFS-2802)


 Snapshot IDs will be restricted to 16 bits. This will allow at most 64K 
 snapshots of a given directory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4577) Webhdfs operations should declare if authentication is required

2013-03-08 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-4577:
-

   Resolution: Fixed
Fix Version/s: 2.0.4-alpha
   0.23.7
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this to trunk, branch-2 and branch-0.23. 

 Webhdfs operations should declare if authentication is required
 ---

 Key: HDFS-4577
 URL: https://issues.apache.org/jira/browse/HDFS-4577
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 0.23.7, 2.0.4-alpha

 Attachments: HDFS-4577.branch-23.patch, HDFS-4577.patch


 Multiple hardcoded conditionals for operations can be avoided in webhdfs if 
 the methods declare if authentication (ie. cannot use a token) is required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4577) Webhdfs operations should declare if authentication is required

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13597507#comment-13597507
 ] 

Hudson commented on HDFS-4577:
--

Integrated in Hadoop-trunk-Commit #3443 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3443/])
HDFS-4577. Webhdfs operations should declare if authentication is required. 
Contributed by Daryn Sharp. (Revision 1454517)

 Result = SUCCESS
kihwal : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454517
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/DeleteOpParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/GetOpParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/HttpOpParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/PostOpParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/PutOpParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java


 Webhdfs operations should declare if authentication is required
 ---

 Key: HDFS-4577
 URL: https://issues.apache.org/jira/browse/HDFS-4577
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 0.23.7, 2.0.4-alpha

 Attachments: HDFS-4577.branch-23.patch, HDFS-4577.patch


 Multiple hardcoded conditionals for operations can be avoided in webhdfs if 
 the methods declare if authentication (ie. cannot use a token) is required.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4580) 0.95 site build failing with 'maven-project-info-reports-plugin: Could not find goal 'dependency-info''

2013-03-08 Thread stack (JIRA)
stack created HDFS-4580:
---

 Summary: 0.95 site build failing with 
'maven-project-info-reports-plugin: Could not find goal 'dependency-info''
 Key: HDFS-4580
 URL: https://issues.apache.org/jira/browse/HDFS-4580
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: stack


Our report plugin is 2.4.  The mvn report page says that 'dependency-info' is 
new in 2.5:


project-info-reports:dependency-info (new in 2.5) is used to generate code 
snippets to be added to build tools.

http://maven.apache.org/plugins/maven-project-info-reports-plugin/

Let me try upgrading our reports plugin.  I tried reproducing locally with the 
same mvn version, but it just works for me.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-2576) Namenode should have a favored nodes hint to enable clients to have control over block placement.

2013-03-08 Thread Hari Mankude (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597548#comment-13597548
 ] 

Hari Mankude commented on HDFS-2576:


Is data skew going to be an issue, where some DNs are overloaded vs. other DNs? 
Would this be an issue when there is other data stored in hdfs along with hbase?

 Namenode should have a favored nodes hint to enable clients to have control 
 over block placement.
 -

 Key: HDFS-2576
 URL: https://issues.apache.org/jira/browse/HDFS-2576
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Pritam Damania
 Attachments: hdfs-2576-1.txt, hdfs-2576-trunk-1.patch


 Sometimes Clients like HBase are required to dynamically compute the 
 datanodes it wishes to place the blocks for a file for higher level of 
 locality. For this purpose there is a need of a way to give the Namenode a 
 hint in terms of a favoredNodes parameter about the locations where the 
 client wants to put each block. The proposed solution is a favored nodes 
 parameter in the addBlock() method and in the create() file method to enable 
 the clients to give the hints to the NameNode about the locations of each 
 replica of the block. Note that this would be just a hint and finally the 
 NameNode would look at disk usage, datanode load etc. and decide whether it 
 can respect the hints or not.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4570) Increase default dfs.image.transfer.timeout value from 1 minute

2013-03-08 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-4570.
---

Resolution: Duplicate

Just rolled this into HDFS-4569, dupe.

 Increase default dfs.image.transfer.timeout value from 1 minute
 ---

 Key: HDFS-4570
 URL: https://issues.apache.org/jira/browse/HDFS-4570
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor

 The default dfs.image.transfer.timeout is currently 60000 ms (1 minute). We've 
 seen the NN / SNN hit this timeout with fsimages in the 2GB+ range. This 
 default should probably be increased.
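
An operator who hits this today could raise the value explicitly; a hedged example (the property name is from this issue, while the milliseconds unit and the chosen value are assumptions):
{noformat}
// Override dfs.image.transfer.timeout for large fsimages.
// Units are assumed to be milliseconds; 10 minutes is an arbitrary choice.
import org.apache.hadoop.conf.Configuration;

public class ImageTransferTimeout {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setInt("dfs.image.transfer.timeout", 10 * 60 * 1000);
    System.out.println(conf.get("dfs.image.transfer.timeout"));
  }
}
{noformat}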

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3367) WebHDFS doesn't use the logged in user when opening connections

2013-03-08 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-3367:
--

Attachment: HDFS-3367.patch
HDFS-3367.branch-23.patch

Uses the instantiating UGI to open all connections.  Authenticated urls are 
only used for token ops, since all others are secured by a delegation token.

 WebHDFS doesn't use the logged in user when opening connections
 ---

 Key: HDFS-3367
 URL: https://issues.apache.org/jira/browse/HDFS-3367
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 0.23.0, 1.0.2, 2.0.0-alpha, 3.0.0
Reporter: Jakob Homan
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-3367.branch-23.patch, HDFS-3367.branch-23.patch, 
 HDFS-3367.patch, HDFS-3367.patch, HDFS-3367.patch


 Something along the lines of
 {noformat}
 UserGroupInformation.loginUserFromKeytab(blah blah)
 FileSystem fs = FileSystem.get(new URI("webhdfs://blah"), conf)
 {noformat}
 doesn't work as webhdfs doesn't use the correct context and the user shows up 
 to the spnego filter without kerberos credentials:
 {noformat}Exception in thread "main" java.io.IOException: Authentication 
 failed, 
 url=http://NN:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&user.name=USER
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHttpUrlConnection(WebHdfsFileSystem.java:337)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.httpConnect(WebHdfsFileSystem.java:347)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:403)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:675)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.initDelegationToken(WebHdfsFileSystem.java:176)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.initialize(WebHdfsFileSystem.java:160)
   at 
 org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
 ...
 Caused by: 
 org.apache.hadoop.security.authentication.client.AuthenticationException: 
 GSSException: No valid credentials provided (Mechanism level: Failed to find 
 any Kerberos tgt)
   at 
 org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:232)
   at 
 org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:141)
   at 
 org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHttpUrlConnection(WebHdfsFileSystem.java:332)
   ... 16 more
 Caused by: GSSException: No valid credentials provided (Mechanism level: 
 Failed to find any Kerberos tgt)
   at 
 sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:130)
 ...{noformat}
 Explicitly getting the current user's context via a doAs block works, but 
 this should be done by webhdfs. 
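
The doAs workaround mentioned above looks roughly like this (editor's sketch; principal, keytab path and NameNode address are placeholders, and the fix makes WebHDFS do the equivalent internally):
{noformat}
import java.net.URI;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class WebHdfsDoAs {
  public static void main(String[] args) throws Exception {
    final Configuration conf = new Configuration();
    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
        "user@EXAMPLE.COM", "/etc/security/keytabs/user.keytab");
    // Run FileSystem.get() inside doAs so the kerberos login context is the
    // one in effect when the connection (and SPNEGO handshake) is opened.
    FileSystem fs = ugi.doAs(new PrivilegedExceptionAction<FileSystem>() {
      @Override
      public FileSystem run() throws Exception {
        return FileSystem.get(new URI("webhdfs://nn.example.com:50070"), conf);
      }
    });
    System.out.println(fs.getUri());
  }
}
{noformat}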

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3817) avoid printing stack information for SafeModeException

2013-03-08 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-3817:
-

Fix Version/s: 0.23.7

Merged to branch-0.23. The only merge conflict was from the name of the rpc 
server introduced by HDFS-2481.

Is there any reason not to pull this and HADOOP-8711 to branch-2?

 avoid printing stack information for SafeModeException
 --

 Key: HDFS-3817
 URL: https://issues.apache.org/jira/browse/HDFS-3817
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 3.0.0, 0.23.7

 Attachments: HDFS-3817.patch


 When NN is in safemode, any namespace change request could cause a 
 SafeModeException to be thrown and logged in the server log, which can make 
 the server side log grow very quickly. 
 The server side log can be more concise if only the exception and error 
 message are printed, but not the stack trace.
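
As an illustration of the intended logging policy (editor's sketch, not the patch itself; per the comment above, the actual change leans on the terse-exception support from HADOOP-8711):
{noformat}
// Safe-mode rejections are logged as a one-line message; other failures
// keep the stack trace. SafeModeException here is a local stand-in so the
// snippet is self-contained.
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class TerseSafeModeLogging {
  static final Log LOG = LogFactory.getLog(TerseSafeModeLogging.class);

  static class SafeModeException extends Exception {
    SafeModeException(String msg) { super(msg); }
  }

  static void logFailure(String op, String src, Exception e) {
    if (e instanceof SafeModeException) {
      LOG.info("Rejected " + op + " for " + src + ": " + e);   // message only
    } else {
      LOG.warn("Failed " + op + " for " + src, e);             // full stack trace
    }
  }
}
{noformat}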

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3367) WebHDFS doesn't use the logged in user when opening connections

2013-03-08 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597599#comment-13597599
 ] 

Daryn Sharp commented on HDFS-3367:
---

Unfortunately I can't figure out how to write tests since this is intertwined 
with kerberos.  I did strenuously test on a secure cluster with combinations of 
normal users, proxy users, tokens, proxy tokens - both with and without a TGT 
present.

 WebHDFS doesn't use the logged in user when opening connections
 ---

 Key: HDFS-3367
 URL: https://issues.apache.org/jira/browse/HDFS-3367
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 0.23.0, 1.0.2, 2.0.0-alpha, 3.0.0
Reporter: Jakob Homan
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-3367.branch-23.patch, HDFS-3367.branch-23.patch, 
 HDFS-3367.patch, HDFS-3367.patch, HDFS-3367.patch


 Something along the lines of
 {noformat}
 UserGroupInformation.loginUserFromKeytab(blah blah)
 FileSystem fs = FileSystem.get(new URI("webhdfs://blah"), conf)
 {noformat}
 doesn't work as webhdfs doesn't use the correct context and the user shows up 
 to the spnego filter without kerberos credentials:
 {noformat}Exception in thread "main" java.io.IOException: Authentication 
 failed, 
 url=http://NN:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&user.name=USER
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHttpUrlConnection(WebHdfsFileSystem.java:337)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.httpConnect(WebHdfsFileSystem.java:347)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:403)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:675)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.initDelegationToken(WebHdfsFileSystem.java:176)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.initialize(WebHdfsFileSystem.java:160)
   at 
 org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
 ...
 Caused by: 
 org.apache.hadoop.security.authentication.client.AuthenticationException: 
 GSSException: No valid credentials provided (Mechanism level: Failed to find 
 any Kerberos tgt)
   at 
 org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:232)
   at 
 org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:141)
   at 
 org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHttpUrlConnection(WebHdfsFileSystem.java:332)
   ... 16 more
 Caused by: GSSException: No valid credentials provided (Mechanism level: 
 Failed to find any Kerberos tgt)
   at 
 sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:130)
 ...{noformat}
 Explicitly getting the current user's context via a doAs block works, but 
 this should be done by webhdfs. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4366) Block Replication Policy Implementation May Skip Higher-Priority Blocks for Lower-Priority Blocks

2013-03-08 Thread Derek Dagit (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Dagit updated HDFS-4366:
--

Attachment: HDFS-4366.patch

Sorry for the delay; just started looking at this again.

Attached a new patch.  One minor change to set an ArrayList capacity for 
UnderReplicatedBlocks, and 4 new tests that expose the 4 original spots I 
identified as potentially harmful.

Comments welcome.

 Block Replication Policy Implementation May Skip Higher-Priority Blocks for 
 Lower-Priority Blocks
 -

 Key: HDFS-4366
 URL: https://issues.apache.org/jira/browse/HDFS-4366
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 0.23.5
Reporter: Derek Dagit
Assignee: Derek Dagit
 Attachments: HDFS-4366.patch, HDFS-4366.patch, HDFS-4366.patch, 
 HDFS-4366.patch, HDFS-4366.patch, hdfs-4366-unittest.patch


 In certain cases, higher-priority under-replicated blocks can be skipped by 
 the replication policy implementation.  The current implementation maintains, 
 for each priority level, an index into a list of blocks that are 
 under-replicated.  Together, the lists compose a priority queue (see note 
 later about branch-0.23).  In some cases when blocks are removed from a list, 
 the caller (BlockManager) properly handles the index into the list from which 
 it removed a block.  In some other cases, the index remains stationary while 
 the list changes.  Whenever this happens, and the removed block happened to 
 be at or before the index, the implementation will skip over a block when 
 selecting blocks for replication work.
 In situations when entire racks are decommissioned, leading to many 
 under-replicated blocks, loss of blocks can occur.
 Background: HDFS-1765
 This patch to trunk greatly improved the state of the replication policy 
 implementation.  Prior to the patch, the following details were true:
   * The block priority queue was no such thing: It was really a set of 
 trees that held blocks in natural ordering, that being by the block's ID, 
 which resulted in iterator walks over the blocks in pseudo-random order.
   * There was only a single index into an iteration over all of the 
 blocks...
   * ... meaning the implementation was only successful in respecting 
 priority levels on the first pass.  Overall, the behavior was a 
 round-robin-type scheduling of blocks.
 After the patch
   * A proper priority queue is implemented, preserving log n operations 
 while iterating over blocks in the order added.
   * A separate index for each priority level is kept...
   * ... allowing for processing of the highest priority blocks first 
 regardless of which priority had last been processed.
 The change was suggested for branch-0.23 as well as trunk, but it does not 
 appear to have been pulled in.
 The problem:
 Although the indices are now tracked in a better way, there is a 
 synchronization issue since the indices are managed outside of methods to 
 modify the contents of the queue.
 Removal of a block from a priority level without adjusting the index can mean 
 that the index then points to the block after the block it originally pointed 
 to.  In the next round of scheduling for that priority level, the block 
 originally pointed to by the index is skipped.
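
A toy model of the bookkeeping bug described above (editor's illustration, not the BlockManager code):
{noformat}
// A per-priority list plus a scheduling index. Removing an element before
// the index without decrementing it makes the next pass skip one block.
import java.util.ArrayList;
import java.util.List;

public class ReplIndexSkip {
  public static void main(String[] args) {
    List<String> queue = new ArrayList<String>();
    queue.add("b0"); queue.add("b1"); queue.add("b2"); queue.add("b3");
    int replIndex = 2;                         // next block to schedule: "b2"

    queue.remove("b0");                        // removal before the index...
    System.out.println(queue.get(replIndex));  // ...prints "b3": "b2" was skipped

    // Correct bookkeeping would decrement the index when removing an element
    // at a position smaller than it, so it keeps pointing at the same block.
  }
}
{noformat}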

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4366) Block Replication Policy Implementation May Skip Higher-Priority Blocks for Lower-Priority Blocks

2013-03-08 Thread Derek Dagit (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Derek Dagit updated HDFS-4366:
--

Status: Patch Available  (was: Open)

 Block Replication Policy Implementation May Skip Higher-Priority Blocks for 
 Lower-Priority Blocks
 -

 Key: HDFS-4366
 URL: https://issues.apache.org/jira/browse/HDFS-4366
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.23.5, 3.0.0
Reporter: Derek Dagit
Assignee: Derek Dagit
 Attachments: HDFS-4366.patch, HDFS-4366.patch, HDFS-4366.patch, 
 HDFS-4366.patch, HDFS-4366.patch, hdfs-4366-unittest.patch


 In certain cases, higher-priority under-replicated blocks can be skipped by 
 the replication policy implementation.  The current implementation maintains, 
 for each priority level, an index into a list of blocks that are 
 under-replicated.  Together, the lists compose a priority queue (see note 
 later about branch-0.23).  In some cases when blocks are removed from a list, 
 the caller (BlockManager) properly handles the index into the list from which 
 it removed a block.  In some other cases, the index remains stationary while 
 the list changes.  Whenever this happens, and the removed block happened to 
 be at or before the index, the implementation will skip over a block when 
 selecting blocks for replication work.
 In situations when entire racks are decommissioned, leading to many 
 under-replicated blocks, loss of blocks can occur.
 Background: HDFS-1765
 This patch to trunk greatly improved the state of the replication policy 
 implementation.  Prior to the patch, the following details were true:
   * The block priority queue was no such thing: It was really a set of 
 trees that held blocks in natural ordering, that being by the block's ID, 
 which resulted in iterator walks over the blocks in pseudo-random order.
   * There was only a single index into an iteration over all of the 
 blocks...
   * ... meaning the implementation was only successful in respecting 
 priority levels on the first pass.  Overall, the behavior was a 
 round-robin-type scheduling of blocks.
 After the patch
   * A proper priority queue is implemented, preserving log n operations 
 while iterating over blocks in the order added.
   * A separate index for each priority level is kept...
   * ... allowing for processing of the highest priority blocks first 
 regardless of which priority had last been processed.
 The change was suggested for branch-0.23 as well as trunk, but it does not 
 appear to have been pulled in.
 The problem:
 Although the indices are now tracked in a better way, there is a 
 synchronization issue since the indices are managed outside of methods to 
 modify the contents of the queue.
 Removal of a block from a priority level without adjusting the index can mean 
 that the index then points to the block after the block it originally pointed 
 to.  In the next round of scheduling for that priority level, the block 
 originally pointed to by the index is skipped.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4573) Fix TestINodeFile on Windows

2013-03-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597654#comment-13597654
 ] 

Suresh Srinivas commented on HDFS-4573:
---

+1 for the patch.

 Fix TestINodeFile on Windows
 

 Key: HDFS-4573
 URL: https://issues.apache.org/jira/browse/HDFS-4573
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4573.patch


 TestINodeFile fails on Windows since individual test cases fail to shutdown 
 the MiniDFS cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4573) Fix TestINodeFile on Windows

2013-03-08 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4573:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed the patch to trunk. Thank you Arpit! Thanks to Chris for the review.

 Fix TestINodeFile on Windows
 

 Key: HDFS-4573
 URL: https://issues.apache.org/jira/browse/HDFS-4573
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4573.patch


 TestINodeFile fails on Windows since individual test cases fail to shutdown 
 the MiniDFS cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4573) Fix TestINodeFile on Windows

2013-03-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597668#comment-13597668
 ] 

Hudson commented on HDFS-4573:
--

Integrated in Hadoop-trunk-Commit #3445 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3445/])
HDFS-4573. Fix TestINodeFile on Windows. Contributed by Arpit Agarwal. 
(Revision 1454616)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454616
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java


 Fix TestINodeFile on Windows
 

 Key: HDFS-4573
 URL: https://issues.apache.org/jira/browse/HDFS-4573
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4573.patch


 TestINodeFile fails on Windows since individual test cases fail to shutdown 
 the MiniDFS cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3367) WebHDFS doesn't use the logged in user when opening connections

2013-03-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597678#comment-13597678
 ] 

Hadoop QA commented on HDFS-3367:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12572836/HDFS-3367.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4062//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4062//console

This message is automatically generated.

 WebHDFS doesn't use the logged in user when opening connections
 ---

 Key: HDFS-3367
 URL: https://issues.apache.org/jira/browse/HDFS-3367
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: webhdfs
Affects Versions: 0.23.0, 1.0.2, 2.0.0-alpha, 3.0.0
Reporter: Jakob Homan
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-3367.branch-23.patch, HDFS-3367.branch-23.patch, 
 HDFS-3367.patch, HDFS-3367.patch, HDFS-3367.patch


 Something along the lines of
 {noformat}
 UserGroupInformation.loginUserFromKeytab(blah blah)
 FileSystem fs = FileSystem.get(new URI("webhdfs://blah"), conf)
 {noformat}
 doesn't work as webhdfs doesn't use the correct context and the user shows up 
 to the spnego filter without kerberos credentials:
 {noformat}Exception in thread "main" java.io.IOException: Authentication 
 failed, 
 url=http://NN:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&user.name=USER
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHttpUrlConnection(WebHdfsFileSystem.java:337)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.httpConnect(WebHdfsFileSystem.java:347)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:403)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:675)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.initDelegationToken(WebHdfsFileSystem.java:176)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.initialize(WebHdfsFileSystem.java:160)
   at 
 org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
 ...
 Caused by: 
 org.apache.hadoop.security.authentication.client.AuthenticationException: 
 GSSException: No valid credentials provided (Mechanism level: Failed to find 
 any Kerberos tgt)
   at 
 org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:232)
   at 
 org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:141)
   at 
 org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHttpUrlConnection(WebHdfsFileSystem.java:332)
   ... 16 more
 Caused by: GSSException: No valid credentials provided (Mechanism level: 
 Failed to find any Kerberos tgt)
   at 
 sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:130)
 ...{noformat}
 Explicitly getting the current user's context via a doAs block works, but 
 this should be done by webhdfs. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4366) Block Replication Policy Implementation May Skip Higher-Priority Blocks for Lower-Priority Blocks

2013-03-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597700#comment-13597700
 ] 

Hadoop QA commented on HDFS-4366:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12572840/HDFS-4366.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

  {color:red}-1 one of tests included doesn't have a timeout.{color}

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.blockmanagement.TestReplicationPolicy

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4063//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4063//console

This message is automatically generated.

 Block Replication Policy Implementation May Skip Higher-Priority Blocks for 
 Lower-Priority Blocks
 -

 Key: HDFS-4366
 URL: https://issues.apache.org/jira/browse/HDFS-4366
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 0.23.5
Reporter: Derek Dagit
Assignee: Derek Dagit
 Attachments: HDFS-4366.patch, HDFS-4366.patch, HDFS-4366.patch, 
 HDFS-4366.patch, HDFS-4366.patch, hdfs-4366-unittest.patch


 In certain cases, higher-priority under-replicated blocks can be skipped by 
 the replication policy implementation.  The current implementation maintains, 
 for each priority level, an index into a list of blocks that are 
 under-replicated.  Together, the lists compose a priority queue (see note 
 later about branch-0.23).  In some cases when blocks are removed from a list, 
 the caller (BlockManager) properly handles the index into the list from which 
 it removed a block.  In some other cases, the index remains stationary while 
 the list changes.  Whenever this happens, and the removed block happened to 
 be at or before the index, the implementation will skip over a block when 
 selecting blocks for replication work.
 In situations when entire racks are decommissioned, leading to many 
 under-replicated blocks, loss of blocks can occur.
 Background: HDFS-1765
 This patch to trunk greatly improved the state of the replication policy 
 implementation.  Prior to the patch, the following details were true:
   * The block priority queue was no such thing: It was really a set of 
 trees that held blocks in natural ordering, that being by the block's ID, 
 which resulted in iterator walks over the blocks in pseudo-random order.
   * There was only a single index into an iteration over all of the 
 blocks...
   * ... meaning the implementation was only successful in respecting 
 priority levels on the first pass.  Overall, the behavior was a 
 round-robin-type scheduling of blocks.
 After the patch
   * A proper priority queue is implemented, preserving log n operations 
 while iterating over blocks in the order added.
   * A separate index for each priority level is kept...
   * ... allowing for processing of the highest priority blocks first 
 regardless of which priority had last been processed.
 The change was suggested for branch-0.23 as well as trunk, but it does not 
 appear to have been pulled in.
 The problem:
 Although the indices are now tracked in a better way, there is a 
 synchronization issue, since the indices are managed outside of the methods 
 that modify the contents of the queue.
 Removal of a block from a priority level without adjusting the index can mean 
 that the index then points to the block after the block it originally pointed 
 to.  In the next round of scheduling for that priority level, the block 
 originally pointed to by the index is skipped.
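
 For illustration only, a minimal self-contained sketch of this failure mode 
 (the class and block names are made up; this is not the actual 
 UnderReplicatedBlocks code): a bookmark index tracked outside the list is not 
 adjusted when an earlier element is removed, so the next scheduling pass 
 silently skips a block.
{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class IndexSkipSketch {
  public static void main(String[] args) {
    // One priority level's list of under-replicated blocks.
    List<String> level = new ArrayList<>(Arrays.asList("blk_1", "blk_2", "blk_3", "blk_4"));
    int bookmark = 2;               // the next scheduling round should resume at blk_3

    // A block before the bookmark is removed, but the bookmark is managed
    // elsewhere and is not adjusted.
    level.remove("blk_1");

    // The next round resumes at the stale bookmark: blk_3 is never scheduled.
    System.out.println("resuming at: " + level.get(bookmark));   // prints blk_4
  }
}
{code}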

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4573) Fix TestINodeFile on Windows

2013-03-08 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13597706#comment-13597706
 ] 

Arpit Agarwal commented on HDFS-4573:
-

Thanks Suresh!

 Fix TestINodeFile on Windows
 

 Key: HDFS-4573
 URL: https://issues.apache.org/jira/browse/HDFS-4573
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4573.patch


 TestINodeFile fails on Windows because individual test cases fail to shut 
 down the MiniDFS cluster.
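
 A generic sketch of the usual fix for this kind of failure (illustrative 
 only, not the attached patch): always shut the cluster down in an @After 
 method so a failed test cannot leave file handles open for the next test 
 case.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class ClusterTeardownSketch {
  private MiniDFSCluster cluster;

  @Before
  public void setUp() throws Exception {
    cluster = new MiniDFSCluster.Builder(new Configuration()).numDataNodes(1).build();
  }

  // Runs even when a test fails, so Windows restrictions on in-use files
  // do not break subsequent test cases.
  @After
  public void tearDown() {
    if (cluster != null) {
      cluster.shutdown();
      cluster = null;
    }
  }

  @Test
  public void testSomething() throws Exception {
    // test body using cluster.getFileSystem(), etc.
  }
}
{code}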

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-2576) Namenode should have a favored nodes hint to enable clients to have control over block placement.

2013-03-08 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HDFS-2576:
--

Attachment: hdfs-2576-trunk-2.patch

Good catch, Suresh. Here is an updated patch.

Hari, the location information is a hint to the namenode, and the namenode 
honors it on a best-effort basis. The namenode may ignore the hint if it sees 
those issues.

 Namenode should have a favored nodes hint to enable clients to have control 
 over block placement.
 -

 Key: HDFS-2576
 URL: https://issues.apache.org/jira/browse/HDFS-2576
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Pritam Damania
 Attachments: hdfs-2576-1.txt, hdfs-2576-trunk-1.patch, 
 hdfs-2576-trunk-2.patch


 Sometimes clients like HBase need to dynamically compute the datanodes on 
 which they wish to place the blocks of a file, for a higher level of 
 locality. For this purpose there needs to be a way to give the Namenode a 
 hint, in the form of a favoredNodes parameter, about the locations where the 
 client wants to put each block. The proposed solution is a favored nodes 
 parameter in the addBlock() method and in the create() file method to enable 
 the clients to give the hints to the NameNode about the locations of each 
 replica of the block. Note that this would be just a hint, and finally the 
 NameNode would look at disk usage, datanode load etc. and decide whether it 
 can respect the hints or not.
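
 Purely as a hypothetical illustration of how a client such as HBase might 
 pass the hint (the create() overload, parameter order, hostnames and port 
 below are assumptions; the real API is whatever the attached patch defines):
{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class FavoredNodesSketch {
  // Hypothetical client call: a create() overload that carries the
  // favored-nodes hint alongside the usual creation parameters.
  static FSDataOutputStream createWithHint(DistributedFileSystem dfs, Path file)
      throws IOException {
    InetSocketAddress[] favoredNodes = {
        new InetSocketAddress("dn1.example.com", 50010),
        new InetSocketAddress("dn2.example.com", 50010),
        new InetSocketAddress("dn3.example.com", 50010)
    };
    // The NameNode treats favoredNodes purely as a hint and may fall back to
    // its normal placement policy (disk usage, datanode load, etc.).
    return dfs.create(file, FsPermission.getDefault(), true /*overwrite*/,
        4096 /*bufferSize*/, (short) 3 /*replication*/,
        128L * 1024 * 1024 /*blockSize*/, null /*progress*/, favoredNodes);
  }
}
{code}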

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4533) start-dfs.sh ignored additional parameters besides -upgrade

2013-03-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13597719#comment-13597719
 ] 

Suresh Srinivas commented on HDFS-4533:
---

Instead of the while loop the following code will add the remaining options, 
right?
{code}
nameStartOpt="$nameStartOpt $@"
{code}

Did you manually test this?

 start-dfs.sh ignored additional parameters besides -upgrade
 ---

 Key: HDFS-4533
 URL: https://issues.apache.org/jira/browse/HDFS-4533
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode
Affects Versions: 2.0.3-alpha
Reporter: Fengdong Yu
  Labels: patch
 Fix For: 2.0.5-beta

 Attachments: HDFS-4533.patch


 start-dfs.sh only takes the -upgrade option and ignores all others. 
 So if you run the following command, it will ignore the -clusterId option:
 start-dfs.sh -upgrade -clusterId 1234

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4497) commons-daemon 1.0.3 dependency has bad group id causing build issues

2013-03-08 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13597731#comment-13597731
 ] 

Suresh Srinivas commented on HDFS-4497:
---

[~sjlee0] if you file an ICLA, then I can add you as a contributor and assign 
this jira to you. I will also commit this patch once the ICLA is filed. Please 
see http://www.apache.org/licenses/icla.txt.

 commons-daemon 1.0.3 dependency has bad group id causing build issues
 -

 Key: HDFS-4497
 URL: https://issues.apache.org/jira/browse/HDFS-4497
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Sangjin Lee
 Attachments: HDFS-4497.patch


 The commons-daemon dependency of the hadoop-hdfs module has been at version 
 1.0.3 for a while. However, 1.0.3 has a pretty well-known groupId error in 
 its pom (org.apache.commons as opposed to commons-daemon). This problem 
 has since been corrected in commons-daemon starting with 1.0.4.
 This causes build problems, however, for many who depend on hadoop-hdfs 
 directly or indirectly. Maven can skip over this metadata inconsistency, but 
 other, less forgiving build systems such as ivy and gradle have a much harder 
 time working around it. For example, in gradle, pretty much the only obvious 
 way to work around this is to override the dependency version.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4497) commons-daemon 1.0.3 dependency has bad group id causing build issues

2013-03-08 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13597739#comment-13597739
 ] 

Sangjin Lee commented on HDFS-4497:
---

Suresh Srinivas, I am already an Apache committer (mina): 
http://people.apache.org/committer-index.html#sjlee

I also filed the ICLA a long time back, before I became a committer. :)

 commons-daemon 1.0.3 dependency has bad group id causing build issues
 -

 Key: HDFS-4497
 URL: https://issues.apache.org/jira/browse/HDFS-4497
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Sangjin Lee
 Attachments: HDFS-4497.patch


 The commons-daemon dependency of the hadoop-hdfs module has been at version 
 1.0.3 for a while. However, 1.0.3 has a pretty well-known groupId error in 
 its pom (org.apache.commons as opposed to commons-daemon). This problem 
 has since been corrected in commons-daemon starting with 1.0.4.
 This causes build problems, however, for many who depend on hadoop-hdfs 
 directly or indirectly. Maven can skip over this metadata inconsistency, but 
 other, less forgiving build systems such as ivy and gradle have a much harder 
 time working around it. For example, in gradle, pretty much the only obvious 
 way to work around this is to override the dependency version.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4579) Annotate snapshot tests

2013-03-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4579:


Attachment: HDFS-4579.patch

Patch to add annotations for new snapshot tests.

 Annotate snapshot tests
 ---

 Key: HDFS-4579
 URL: https://issues.apache.org/jira/browse/HDFS-4579
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: Snapshot (HDFS-2802)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: Snapshot (HDFS-2802)

 Attachments: HDFS-4579.patch


 Add annotations to snapshot tests, required to merge into trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4572) Fix TestJournal failures on Windows

2013-03-08 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13597813#comment-13597813
 ] 

Ivan Mitic commented on HDFS-4572:
--

Patch looks good, +1

One question though: I see that you're adding a timeout to every test case in 
the test. Is this a new guideline? I found it hard to debug tests like these 
and always ended up removing the timeout, which seemed a bit odd. Would it 
make more sense to have a test-wide 15 minute timeout or something? Ideally, 
the timeout would be configured from the outside and not hit when debugging 
from eclipse.
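
For reference, a minimal JUnit 4 sketch of the two styles being discussed 
(the class name is made up, and the 15-minute figure is just the example 
above, not a project rule):
{code}
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

public class TimeoutStylesSketch {
  // Alternative suggested above: a single test-wide timeout via a rule.
  @Rule
  public Timeout testWideTimeout = new Timeout(15 * 60 * 1000);  // 15 minutes, in ms

  // Current trunk guideline: an explicit timeout on each test case.
  @Test(timeout = 30000)
  public void testWithPerTestTimeout() throws Exception {
    // test body
  }
}
{code}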


 Fix TestJournal failures on Windows
 ---

 Key: HDFS-4572
 URL: https://issues.apache.org/jira/browse/HDFS-4572
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4572.patch, HDFS-4572.patch


 Multiple test failures in TestJournal. Windows is stricter about restricting 
 access to in-use files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3981) access time is set without holding writelock in FSNamesystem

2013-03-08 Thread Xiaobo Peng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobo Peng updated HDFS-3981:
--

Attachment: HDFS-3981-trunk.patch
HDFS-3981-branch-2.patch
HDFS-3981-branch-0.23.patch

 access time is set without holding writelock in FSNamesystem
 

 Key: HDFS-3981
 URL: https://issues.apache.org/jira/browse/HDFS-3981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.3
Reporter: Xiaobo Peng
Assignee: Xiaobo Peng
Priority: Minor
 Attachments: HDFS-3981-branch-0.23.4.patch, 
 HDFS-3981-branch-0.23.patch, HDFS-3981-branch-2.patch, HDFS-3981-trunk.patch


 Incorrect condition in {{FSNamesystem.getBlockLocations()}} can lead to 
 updating times without holding the write lock. In most cases this condition 
 will force {{FSNamesystem.getBlockLocations()}} to hold the write lock, even 
 if times do not need to be updated.
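
 A simplified sketch of the intended pattern (not the actual FSNamesystem 
 code; the class name and precision value are illustrative): decide under the 
 read lock whether the access time is stale, and only take the write lock when 
 it actually needs updating.
{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class AccessTimeSketch {
  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();
  private long accessTime;
  private final long precision = 60 * 60 * 1000L;  // e.g. dfs.namenode.accesstime.precision

  public void getBlockLocations(long now) {
    fsLock.readLock().lock();
    boolean stale;
    try {
      stale = now > accessTime + precision;   // decide under the read lock
    } finally {
      fsLock.readLock().unlock();
    }
    if (stale) {
      fsLock.writeLock().lock();              // mutate only under the write lock
      try {
        if (now > accessTime + precision) {   // re-check after re-acquiring
          accessTime = now;
        }
      } finally {
        fsLock.writeLock().unlock();
      }
    }
  }
}
{code}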

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3981) access time is set without holding writelock in FSNamesystem

2013-03-08 Thread Xiaobo Peng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13597834#comment-13597834
 ] 

Xiaobo Peng commented on HDFS-3981:
---

Attached the patches for trunk, branch-2 and 0.23. 

Note: my previous comment "Also, seems we need to release readlock before 
trying to acquire writelock..." is invalid.

 access time is set without holding writelock in FSNamesystem
 

 Key: HDFS-3981
 URL: https://issues.apache.org/jira/browse/HDFS-3981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.3
Reporter: Xiaobo Peng
Assignee: Xiaobo Peng
Priority: Minor
 Attachments: HDFS-3981-branch-0.23.4.patch, 
 HDFS-3981-branch-0.23.patch, HDFS-3981-branch-2.patch, HDFS-3981-trunk.patch


 Incorrect condition in {{FSNamesystem.getBlockLocations()}} can lead to 
 updating times without holding the write lock. In most cases this condition 
 will force {{FSNamesystem.getBlockLocations()}} to hold the write lock, even 
 if times do not need to be updated.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3981) access time is set without holding writelock in FSNamesystem

2013-03-08 Thread Xiaobo Peng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobo Peng updated HDFS-3981:
--

Affects Version/s: 0.23.5
   Status: Patch Available  (was: Open)

 access time is set without holding writelock in FSNamesystem
 

 Key: HDFS-3981
 URL: https://issues.apache.org/jira/browse/HDFS-3981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.5, 0.23.3
Reporter: Xiaobo Peng
Assignee: Xiaobo Peng
Priority: Minor
 Attachments: HDFS-3981-branch-0.23.4.patch, 
 HDFS-3981-branch-0.23.patch, HDFS-3981-branch-2.patch, HDFS-3981-trunk.patch


 Incorrect condition in {{FSNamesystem.getBlockLocations()}} can lead to 
 updating times without holding the write lock. In most cases this condition 
 will force {{FSNamesystem.getBlockLocations()}} to hold the write lock, even 
 if times do not need to be updated.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4572) Fix TestJournal failures on Windows

2013-03-08 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13597847#comment-13597847
 ] 

Arpit Agarwal commented on HDFS-4572:
-

Thanks for reviewing! 

Ivan, adding timeout annotations for each new or changed test case is a 
requirement in trunk now.

 Fix TestJournal failures on Windows
 ---

 Key: HDFS-4572
 URL: https://issues.apache.org/jira/browse/HDFS-4572
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4572.patch, HDFS-4572.patch


 Multiple test failures in TestJournal. Windows is stricter about restricting 
 access to in-use files.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3981) access time is set without holding writelock in FSNamesystem

2013-03-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13597850#comment-13597850
 ] 

Hadoop QA commented on HDFS-3981:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12572891/HDFS-3981-trunk.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4064//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4064//console

This message is automatically generated.

 access time is set without holding writelock in FSNamesystem
 

 Key: HDFS-3981
 URL: https://issues.apache.org/jira/browse/HDFS-3981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.3, 0.23.5
Reporter: Xiaobo Peng
Assignee: Xiaobo Peng
Priority: Minor
 Attachments: HDFS-3981-branch-0.23.4.patch, 
 HDFS-3981-branch-0.23.patch, HDFS-3981-branch-2.patch, HDFS-3981-trunk.patch


 Incorrect condition in {{FSNamesystem.getBlockLocations()}} can lead to 
 updating times without holding the write lock. In most cases this condition 
 will force {{FSNamesystem.getBlockLocations()}} to hold the write lock, even 
 if times do not need to be updated.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-2350) Secure DN doesn't print output to console when started interactively

2013-03-08 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13597855#comment-13597855
 ] 

Chris Nauroth commented on HDFS-2350:
-

This has been resolved by the patch committed for HDFS-4519.

 Secure DN doesn't print output to console when started interactively
 

 Key: HDFS-2350
 URL: https://issues.apache.org/jira/browse/HDFS-2350
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Assignee: Chris Nauroth
 Fix For: 0.24.0


 If one starts a secure DN (using jsvc) interactively, the output is not 
 printed to the console, but instead ends up in {{$HADOOP_LOG_DIR/jsvc.err}} 
 and {{$HADOOP_LOG_DIR/jsvc.out}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-2350) Secure DN doesn't print output to console when started interactively

2013-03-08 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-2350.
-

Resolution: Fixed

 Secure DN doesn't print output to console when started interactively
 

 Key: HDFS-2350
 URL: https://issues.apache.org/jira/browse/HDFS-2350
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 0.24.0
Reporter: Aaron T. Myers
Assignee: Chris Nauroth
 Fix For: 0.24.0


 If one starts a secure DN (using jsvc) interactively, the output is not 
 printed to the console, but instead ends up in {{$HADOOP_LOG_DIR/jsvc.err}} 
 and {{$HADOOP_LOG_DIR/jsvc.out}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira