[jira] [Commented] (HDFS-3507) DFS#isInSafeMode needs to execute only on Active NameNode

2012-10-31 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487583#comment-13487583
 ] 

Aaron T. Myers commented on HDFS-3507:
--

Hi Vinay, the config dfs.ha.allow.stale.reads is only used for tests. As 
such, I think it's OK to label these operations as I previously suggested.

 DFS#isInSafeMode needs to execute only on Active NameNode
 -

 Key: HDFS-3507
 URL: https://issues.apache.org/jira/browse/HDFS-3507
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Vinay
Assignee: Vinay
Priority: Critical
 Attachments: HDFS-3507.patch, HDFS-3507.patch


 Currently DFS#isInSafeMode does not check the NN state; it can be 
 executed on any of the NNs.
 However, HBase uses this API to check for NN safemode before starting up 
 its service.
 If the first configured NN is in standby, then DFS#isInSafeMode will check the 
 standby NN's safemode, but HBase wants the state of the Active NN.
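A minimal, dependency-free sketch of the startup check described above; SafeModeCheck is a hypothetical stand-in for DistributedFileSystem#isInSafeMode (which, per this issue, should be answered by the Active NN), so nothing here is the real Hadoop API:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SafeModeWait {
    // Stand-in for the DFS call; in real HDFS this must be served by the Active NN.
    interface SafeModeCheck {
        boolean isInSafeMode();
    }

    // Poll until the NameNode reports it has left safemode; returns how many
    // polls still saw safemode on.
    static int waitUntilOutOfSafeMode(SafeModeCheck fs, int maxPolls) {
        int polls = 0;
        while (polls < maxPolls && fs.isInSafeMode()) {
            polls++;
        }
        return polls;
    }

    public static void main(String[] args) {
        // Simulate a NameNode that leaves safemode after three checks.
        AtomicInteger calls = new AtomicInteger();
        SafeModeCheck fake = () -> calls.incrementAndGet() <= 3;
        System.out.println(waitUntilOutOfSafeMode(fake, 100)); // prints 3
    }
}
```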

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4047) BPServiceActor has nested shouldRun loops

2012-10-31 Thread Yanbo Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanbo Liang updated HDFS-4047:
--

Attachment: HADOOP-4047.patch

As described, we hoist the info log from offerService() out to run() and 
remove the while loop in offerService(). The patch looks like a huge change 
only because of the resulting re-indentation of the code.

 BPServiceActor has nested shouldRun loops
 -

 Key: HDFS-4047
 URL: https://issues.apache.org/jira/browse/HDFS-4047
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Priority: Minor
 Attachments: HADOOP-4047.patch


 BPServiceActor#run and offerService both have while (shouldRun()) loops. We only 
 need the outer one, i.e. we can hoist the info log from offerService out to run 
 and remove the inner while loop.
 {code}
 BPServiceActor#run:
 while (shouldRun()) {
   try {
     offerService();
   } catch (Exception ex) {
     ...

 offerService:
 while (shouldRun()) {
   try {
 {code}
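A runnable sketch of the proposed refactor, with shouldRun() modeled as a countdown so the example terminates; the class only mimics the shape of BPServiceActor and is not the Hadoop code:

```java
public class HoistedLoop {
    private int remaining;
    int offers = 0;

    HoistedLoop(int iterations) { this.remaining = iterations; }

    // Modeled as a countdown so the sketch terminates; the real method checks
    // the actor's running state.
    boolean shouldRun() { return remaining-- > 0; }

    // After the refactor there is no inner while (shouldRun()) here: one call,
    // one unit of work.
    void offerService() { offers++; }

    void run() {
        // The single remaining shouldRun() loop lives in run().
        while (shouldRun()) {
            try {
                offerService();
            } catch (Exception ex) {
                // log and retry, as the real run() does
            }
        }
    }

    public static void main(String[] args) {
        HoistedLoop actor = new HoistedLoop(5);
        actor.run();
        System.out.println(actor.offers); // prints 5: one offerService() per outer iteration
    }
}
```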



[jira] [Updated] (HDFS-4047) BPServiceActor has nested shouldRun loops

2012-10-31 Thread Yanbo Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanbo Liang updated HDFS-4047:
--

Status: Patch Available  (was: Open)

 BPServiceActor has nested shouldRun loops
 -

 Key: HDFS-4047
 URL: https://issues.apache.org/jira/browse/HDFS-4047
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Priority: Minor
 Attachments: HADOOP-4047.patch


 BPServiceActor#run and offerService both have while (shouldRun()) loops. We only 
 need the outer one, i.e. we can hoist the info log from offerService out to run 
 and remove the inner while loop.
 {code}
 BPServiceActor#run:
 while (shouldRun()) {
   try {
     offerService();
   } catch (Exception ex) {
     ...

 offerService:
 while (shouldRun()) {
   try {
 {code}



[jira] [Commented] (HDFS-4129) Add utility methods to dump NameNode in memory tree for testing

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487583#comment-13487590
 ] 

Hudson commented on HDFS-4129:
--

Integrated in Hadoop-trunk-Commit #2945 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/2945/])
HDFS-4129. Add utility methods to dump NameNode in memory tree for testing. 
Contributed by Tsz Wo (Nicholas), SZE. (Revision 1403956)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403956
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsLimits.java


 Add utility methods to dump NameNode in memory tree for testing
 ---

 Key: HDFS-4129
 URL: https://issues.apache.org/jira/browse/HDFS-4129
 Project: Hadoop HDFS
  Issue Type: Test
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Fix For: 3.0.0

 Attachments: h4129_20121029b.patch, h4129_20121029.patch, 
 h4129_20121030.patch


 The output of the utility methods looks like the following.
 {noformat}
 \- foo   (INodeDirectory)
   \- sub1   (INodeDirectory)
     +- file1   (INodeFile)
     +- file2   (INodeFile)
     +- sub11   (INodeDirectory)
       \- file3   (INodeFile)
     \- z_file4   (INodeFile)
 {noformat}
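A toy sketch of how such a dump helper might be written; INode here is a simplified stand-in, not the HDFS class, and the exact prefix rules ("\-" for the last child, "+-" otherwise) are an assumption based on the sample output above:

```java
import java.util.ArrayList;
import java.util.List;

public class TreeDump {
    static class INode {
        final String name;
        final List<INode> children = new ArrayList<>();
        INode(String name) { this.name = name; }
        void add(INode child) { children.add(child); }
        // Toy heuristic for the sketch only; the real INode knows its own type.
        boolean isDirectory() { return !children.isEmpty(); }
    }

    // Recursively print one node per line: "\-" marks a last child, "+-" the rest.
    static void dump(INode node, String prefix, boolean last, StringBuilder out) {
        out.append(prefix).append(last ? "\\- " : "+- ").append(node.name)
           .append("   (").append(node.isDirectory() ? "INodeDirectory" : "INodeFile")
           .append(")\n");
        String childPrefix = prefix + "  ";
        for (int i = 0; i < node.children.size(); i++) {
            dump(node.children.get(i), childPrefix, i == node.children.size() - 1, out);
        }
    }

    public static void main(String[] args) {
        INode foo = new INode("foo");
        foo.add(new INode("file1"));
        foo.add(new INode("file2"));
        StringBuilder out = new StringBuilder();
        dump(foo, "", true, out);
        System.out.print(out);
    }
}
```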



[jira] [Commented] (HDFS-3916) libwebhdfs (C client) code cleanups

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487591#comment-13487591
 ] 

Hudson commented on HDFS-3916:
--

Integrated in Hadoop-trunk-Commit #2945 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/2945/])
HDFS-3916. libwebhdfs testing code cleanup. Contributed by Jing Zhao. 
(Revision 1403922)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403922
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_multi_write.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_ops.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_read.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_threaded.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_write.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_read_bm.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/native_mini_dfs.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/native_mini_dfs.h


 libwebhdfs (C client) code cleanups
 ---

 Key: HDFS-3916
 URL: https://issues.apache.org/jira/browse/HDFS-3916
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.2-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.0.3-alpha

 Attachments: 0002-fix.patch, HDFS-3916.003.patch, 
 HDFS-3916.004.patch, HDFS-3916.005.patch, HDFS-3916.006.patch


 Code cleanups in libwebhdfs.
 * don't duplicate exception.c, exception.h, expect.h, jni_helper.c.  We have 
 one copy of these files; we don't need two.
 * remember to set errno in all public library functions (this is part of the 
 API)
 * fix undefined symbols (if a function is not implemented, it should return 
 ENOTSUP, but still exist)
 * don't expose private data structures in the (end-user visible) public 
 headers
 * can't re-use hdfsBuilder as hdfsFS, because the strings in hdfsBuilder are 
 not dynamically allocated.



[jira] [Commented] (HDFS-4047) BPServiceActor has nested shouldRun loops

2012-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487614#comment-13487614
 ] 

Hadoop QA commented on HDFS-4047:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12551496/HADOOP-4047.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3432//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3432//console

This message is automatically generated.

 BPServiceActor has nested shouldRun loops
 -

 Key: HDFS-4047
 URL: https://issues.apache.org/jira/browse/HDFS-4047
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Priority: Minor
 Attachments: HADOOP-4047.patch


 BPServiceActor#run and offerService both have while (shouldRun()) loops. We only 
 need the outer one, i.e. we can hoist the info log from offerService out to run 
 and remove the inner while loop.
 {code}
 BPServiceActor#run:
 while (shouldRun()) {
   try {
     offerService();
   } catch (Exception ex) {
     ...

 offerService:
 while (shouldRun()) {
   try {
 {code}



[jira] [Commented] (HDFS-4129) Add utility methods to dump NameNode in memory tree for testing

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487666#comment-13487666
 ] 

Hudson commented on HDFS-4129:
--

Integrated in Hadoop-Yarn-trunk #22 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/22/])
HDFS-4129. Add utility methods to dump NameNode in memory tree for testing. 
Contributed by Tsz Wo (Nicholas), SZE. (Revision 1403956)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403956
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsLimits.java


 Add utility methods to dump NameNode in memory tree for testing
 ---

 Key: HDFS-4129
 URL: https://issues.apache.org/jira/browse/HDFS-4129
 Project: Hadoop HDFS
  Issue Type: Test
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Fix For: 3.0.0

 Attachments: h4129_20121029b.patch, h4129_20121029.patch, 
 h4129_20121030.patch


 The output of the utility methods looks like the following.
 {noformat}
 \- foo   (INodeDirectory)
   \- sub1   (INodeDirectory)
     +- file1   (INodeFile)
     +- file2   (INodeFile)
     +- sub11   (INodeDirectory)
       \- file3   (INodeFile)
     \- z_file4   (INodeFile)
 {noformat}



[jira] [Commented] (HDFS-3573) Supply NamespaceInfo when instantiating JournalManagers

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487668#comment-13487668
 ] 

Hudson commented on HDFS-3573:
--

Integrated in Hadoop-Yarn-trunk #22 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/22/])
Moved HDFS-3573 entry in CHANGES.txt from trunk to 2.0.3-alpha section 
(Revision 1403740)

 Result = SUCCESS
umamahesh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403740
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Supply NamespaceInfo when instantiating JournalManagers
 ---

 Key: HDFS-3573
 URL: https://issues.apache.org/jira/browse/HDFS-3573
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: 0001-HDFS-3573-for-branch-2.patch, hdfs-3573.txt, 
 hdfs-3573.txt, hdfs-3573.txt, hdfs-3573.txt


 Currently, the JournalManagers are instantiated before the NamespaceInfo is 
 loaded from local storage directories. This is problematic since the JM may 
 want to verify that the storage info associated with the journal matches the 
 NN which is starting up (e.g. to prevent an operator accidentally configuring 
 two clusters against the same remote journal storage). This JIRA rejiggers 
 the initialization sequence so that the JMs receive NamespaceInfo as a 
 constructor argument.
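An illustrative sketch of the constructor-injection pattern the description proposes; NamespaceInfo, RemoteJournal, and the clusterId check below are simplified stand-ins for the real HDFS types, not the actual patch:

```java
public class JournalInit {
    static class NamespaceInfo {
        final String clusterId;
        NamespaceInfo(String clusterId) { this.clusterId = clusterId; }
    }

    // Stand-in for whatever storage info the remote journal has recorded.
    static class RemoteJournal {
        final String storedClusterId;
        RemoteJournal(String id) { this.storedClusterId = id; }
    }

    static class JournalManager {
        final NamespaceInfo nsInfo;
        // NamespaceInfo arrives as a constructor argument, so the mismatch
        // check runs before the journal is ever used.
        JournalManager(NamespaceInfo nsInfo, RemoteJournal journal) {
            if (!nsInfo.clusterId.equals(journal.storedClusterId)) {
                throw new IllegalStateException("journal belongs to cluster "
                    + journal.storedClusterId + ", not " + nsInfo.clusterId);
            }
            this.nsInfo = nsInfo;
        }
    }

    public static void main(String[] args) {
        NamespaceInfo ns = new NamespaceInfo("CID-1");
        new JournalManager(ns, new RemoteJournal("CID-1")); // matching cluster: ok
        try {
            new JournalManager(ns, new RemoteJournal("CID-2")); // wrong cluster
            System.out.println("no check");
        } catch (IllegalStateException e) {
            System.out.println("rejected"); // prints "rejected"
        }
    }
}
```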



[jira] [Commented] (HDFS-3789) JournalManager#format() should be able to throw IOException

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487669#comment-13487669
 ] 

Hudson commented on HDFS-3789:
--

Integrated in Hadoop-Yarn-trunk #22 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/22/])
Moved HDFS-3789 entry in CHANGES.txt from trunk to 2.0.3-alpha section 
(Revision 1403765)

 Result = SUCCESS
umamahesh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403765
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 JournalManager#format() should be able to throw IOException
 ---

 Key: HDFS-3789
 URL: https://issues.apache.org/jira/browse/HDFS-3789
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Affects Versions: 3.0.0
Reporter: Ivan Kelly
Assignee: Ivan Kelly
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: 0003-HDFS-3789-for-branch-2.patch, HDFS-3789.diff


 Currently JournalManager#format cannot throw any exception. As format can 
 fail, we should be able to propagate this failure upwards. Otherwise, format 
 will fail silently, and the admin will start using the cluster with a 
 failed/unusable journal manager.
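A small sketch of the API change; the interface and the failing implementation below are illustrative stand-ins, not the actual HDFS-3789 code:

```java
import java.io.IOException;

public class FormatThrows {
    interface JournalManager {
        // Declaring IOException lets a failed format propagate to the caller
        // instead of failing silently; before the fix this could not throw.
        void format() throws IOException;
    }

    // Hypothetical implementation whose storage cannot be reached.
    static class UnreachableJournal implements JournalManager {
        @Override
        public void format() throws IOException {
            throw new IOException("cannot reach journal storage");
        }
    }

    public static void main(String[] args) {
        try {
            new UnreachableJournal().format();
            System.out.println("formatted");
        } catch (IOException e) {
            // The admin now sees the failure instead of running the cluster
            // with an unusable journal.
            System.out.println("format failed: " + e.getMessage());
        }
    }
}
```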



[jira] [Commented] (HDFS-3916) libwebhdfs (C client) code cleanups

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487671#comment-13487671
 ] 

Hudson commented on HDFS-3916:
--

Integrated in Hadoop-Yarn-trunk #22 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/22/])
HDFS-3916. libwebhdfs testing code cleanup. Contributed by Jing Zhao. 
(Revision 1403922)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403922
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_multi_write.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_ops.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_read.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_threaded.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_write.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_read_bm.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/native_mini_dfs.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/native_mini_dfs.h


 libwebhdfs (C client) code cleanups
 ---

 Key: HDFS-3916
 URL: https://issues.apache.org/jira/browse/HDFS-3916
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.2-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.0.3-alpha

 Attachments: 0002-fix.patch, HDFS-3916.003.patch, 
 HDFS-3916.004.patch, HDFS-3916.005.patch, HDFS-3916.006.patch


 Code cleanups in libwebhdfs.
 * don't duplicate exception.c, exception.h, expect.h, jni_helper.c.  We have 
 one copy of these files; we don't need two.
 * remember to set errno in all public library functions (this is part of the 
 API)
 * fix undefined symbols (if a function is not implemented, it should return 
 ENOTSUP, but still exist)
 * don't expose private data structures in the (end-user visible) public 
 headers
 * can't re-use hdfsBuilder as hdfsFS, because the strings in hdfsBuilder are 
 not dynamically allocated.



[jira] [Commented] (HDFS-3695) Genericize format() to non-file JournalManagers

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487672#comment-13487672
 ] 

Hudson commented on HDFS-3695:
--

Integrated in Hadoop-Yarn-trunk #22 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/22/])
Moved HDFS-3695 entry in CHANGES.txt from trunk to 2.0.3-alpha section 
(Revision 1403748)

 Result = SUCCESS
umamahesh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403748
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Genericize format() to non-file JournalManagers
 ---

 Key: HDFS-3695
 URL: https://issues.apache.org/jira/browse/HDFS-3695
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Affects Versions: QuorumJournalManager (HDFS-3077)
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: 0002-HDFS-3695-for-branch-2.patch, hdfs-3695.txt, 
 hdfs-3695.txt, hdfs-3695.txt


 Currently, the namenode -format and namenode -initializeSharedEdits 
 commands do not understand how to do anything with non-file-based shared 
 storage. This affects both BookKeeperJournalManager and QuorumJournalManager.
 This JIRA is to plumb through the formatting of edits directories using 
 pluggable journal manager implementations so that no separate step needs to 
 be taken to format them -- the same commands will work for NFS-based storage 
 or one of the alternate implementations.
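An illustrative sketch of what "plumbing format through pluggable implementations" means, assuming a common JournalManager interface as in HDFS; the interface, helper, and URIs below are made up for the example:

```java
import java.util.Arrays;
import java.util.List;

public class FormatAll {
    interface JournalManager {
        String name();
        // The same call works for file-based and non-file-based storage.
        void format();
    }

    // Hypothetical factory for the sketch; real backends would each supply
    // their own implementation.
    static JournalManager jm(String name) {
        return new JournalManager() {
            public String name() { return name; }
            public void format() { /* storage-specific work happens here */ }
        };
    }

    // One command formats every configured journal; no separate per-backend step.
    static int formatAll(List<JournalManager> managers) {
        int formatted = 0;
        for (JournalManager m : managers) {
            m.format();
            formatted++;
        }
        return formatted;
    }

    public static void main(String[] args) {
        List<JournalManager> managers =
            Arrays.asList(jm("file:///data/edits"), jm("qjournal://jn1;jn2;jn3/ns"));
        System.out.println(formatAll(managers)); // prints 2
    }
}
```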



[jira] [Commented] (HDFS-3809) Make BKJM use protobufs for all serialization with ZK

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487673#comment-13487673
 ] 

Hudson commented on HDFS-3809:
--

Integrated in Hadoop-Yarn-trunk #22 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/22/])
Moved HDFS-3809 entry in CHANGES.txt from trunk to 2.0.3-alpha section 
(Revision 1403769)

 Result = SUCCESS
umamahesh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403769
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Make BKJM use protobufs for all serialization with ZK
 -

 Key: HDFS-3809
 URL: https://issues.apache.org/jira/browse/HDFS-3809
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Ivan Kelly
Assignee: Ivan Kelly
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: 
 0001-HDFS-3809.-Make-BKJM-use-protobufs-for-all-serializa.patch, 
 0001-HDFS-3809.-Make-BKJM-use-protobufs-for-all-serializa.patch, 
 0004-HDFS-3809-for-branch-2.patch, HDFS-3809.diff, HDFS-3809.diff, 
 HDFS-3809.diff


 HDFS uses protobufs for serialization in many places. Protobufs allow fields 
 to be added without breaking backwards compatibility or requiring new parsing code to be written. 
 For this reason, we should use them in BKJM also.



[jira] [Commented] (HDFS-4129) Add utility methods to dump NameNode in memory tree for testing

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487722#comment-13487722
 ] 

Hudson commented on HDFS-4129:
--

Integrated in Hadoop-Hdfs-trunk #1212 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1212/])
HDFS-4129. Add utility methods to dump NameNode in memory tree for testing. 
Contributed by Tsz Wo (Nicholas), SZE. (Revision 1403956)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403956
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsLimits.java


 Add utility methods to dump NameNode in memory tree for testing
 ---

 Key: HDFS-4129
 URL: https://issues.apache.org/jira/browse/HDFS-4129
 Project: Hadoop HDFS
  Issue Type: Test
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Fix For: 3.0.0

 Attachments: h4129_20121029b.patch, h4129_20121029.patch, 
 h4129_20121030.patch


 The output of the utility methods looks like the following.
 {noformat}
 \- foo   (INodeDirectory)
   \- sub1   (INodeDirectory)
     +- file1   (INodeFile)
     +- file2   (INodeFile)
     +- sub11   (INodeDirectory)
       \- file3   (INodeFile)
     \- z_file4   (INodeFile)
 {noformat}



[jira] [Commented] (HDFS-3789) JournalManager#format() should be able to throw IOException

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487725#comment-13487725
 ] 

Hudson commented on HDFS-3789:
--

Integrated in Hadoop-Hdfs-trunk #1212 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1212/])
Moved HDFS-3789 entry in CHANGES.txt from trunk to 2.0.3-alpha section 
(Revision 1403765)

 Result = FAILURE
umamahesh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403765
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 JournalManager#format() should be able to throw IOException
 ---

 Key: HDFS-3789
 URL: https://issues.apache.org/jira/browse/HDFS-3789
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Affects Versions: 3.0.0
Reporter: Ivan Kelly
Assignee: Ivan Kelly
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: 0003-HDFS-3789-for-branch-2.patch, HDFS-3789.diff


 Currently JournalManager#format cannot throw any exception. As format can 
 fail, we should be able to propagate this failure upwards. Otherwise, format 
 will fail silently, and the admin will start using the cluster with a 
 failed/unusable journal manager.



[jira] [Commented] (HDFS-3573) Supply NamespaceInfo when instantiating JournalManagers

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487724#comment-13487724
 ] 

Hudson commented on HDFS-3573:
--

Integrated in Hadoop-Hdfs-trunk #1212 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1212/])
Moved HDFS-3573 entry in CHANGES.txt from trunk to 2.0.3-alpha section 
(Revision 1403740)

 Result = FAILURE
umamahesh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403740
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Supply NamespaceInfo when instantiating JournalManagers
 ---

 Key: HDFS-3573
 URL: https://issues.apache.org/jira/browse/HDFS-3573
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: 0001-HDFS-3573-for-branch-2.patch, hdfs-3573.txt, 
 hdfs-3573.txt, hdfs-3573.txt, hdfs-3573.txt


 Currently, the JournalManagers are instantiated before the NamespaceInfo is 
 loaded from local storage directories. This is problematic since the JM may 
 want to verify that the storage info associated with the journal matches the 
 NN which is starting up (e.g. to prevent an operator accidentally configuring 
 two clusters against the same remote journal storage). This JIRA rejiggers 
 the initialization sequence so that the JMs receive NamespaceInfo as a 
 constructor argument.



[jira] [Commented] (HDFS-3916) libwebhdfs (C client) code cleanups

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487727#comment-13487727
 ] 

Hudson commented on HDFS-3916:
--

Integrated in Hadoop-Hdfs-trunk #1212 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1212/])
HDFS-3916. libwebhdfs testing code cleanup. Contributed by Jing Zhao. 
(Revision 1403922)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403922
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_multi_write.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_ops.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_read.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_threaded.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_write.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_read_bm.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/native_mini_dfs.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/native_mini_dfs.h


 libwebhdfs (C client) code cleanups
 ---

 Key: HDFS-3916
 URL: https://issues.apache.org/jira/browse/HDFS-3916
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.2-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.0.3-alpha

 Attachments: 0002-fix.patch, HDFS-3916.003.patch, 
 HDFS-3916.004.patch, HDFS-3916.005.patch, HDFS-3916.006.patch


 Code cleanups in libwebhdfs.
 * don't duplicate exception.c, exception.h, expect.h, jni_helper.c.  We have 
 one copy of these files; we don't need two.
 * remember to set errno in all public library functions (this is part of the 
 API)
 * fix undefined symbols (if a function is not implemented, it should return 
 ENOTSUP, but still exist)
 * don't expose private data structures in the (end-user visible) public 
 headers
 * can't re-use hdfsBuilder as hdfsFS, because the strings in hdfsBuilder are 
 not dynamically allocated.



[jira] [Commented] (HDFS-3695) Genericize format() to non-file JournalManagers

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487728#comment-13487728
 ] 

Hudson commented on HDFS-3695:
--

Integrated in Hadoop-Hdfs-trunk #1212 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1212/])
Moved HDFS-3695 entry in CHANGES.txt from trunk to 2.0.3-alpha section 
(Revision 1403748)

 Result = FAILURE
umamahesh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403748
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Genericize format() to non-file JournalManagers
 ---

 Key: HDFS-3695
 URL: https://issues.apache.org/jira/browse/HDFS-3695
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Affects Versions: QuorumJournalManager (HDFS-3077)
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: 0002-HDFS-3695-for-branch-2.patch, hdfs-3695.txt, 
 hdfs-3695.txt, hdfs-3695.txt


 Currently, the namenode -format and namenode -initializeSharedEdits 
 commands do not understand how to do anything with non-file-based shared 
 storage. This affects both BookKeeperJournalManager and QuorumJournalManager.
 This JIRA is to plumb through the formatting of edits directories using 
 pluggable journal manager implementations so that no separate step needs to 
 be taken to format them -- the same commands will work for NFS-based storage 
 or one of the alternate implementations.



[jira] [Commented] (HDFS-3809) Make BKJM use protobufs for all serialization with ZK

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487729#comment-13487729
 ] 

Hudson commented on HDFS-3809:
--

Integrated in Hadoop-Hdfs-trunk #1212 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1212/])
Moved HDFS-3809 entry in CHANGES.txt from trunk to 2.0.3-alpha section 
(Revision 1403769)

 Result = FAILURE
umamahesh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403769
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Make BKJM use protobufs for all serialization with ZK
 -

 Key: HDFS-3809
 URL: https://issues.apache.org/jira/browse/HDFS-3809
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Ivan Kelly
Assignee: Ivan Kelly
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: 
 0001-HDFS-3809.-Make-BKJM-use-protobufs-for-all-serializa.patch, 
 0001-HDFS-3809.-Make-BKJM-use-protobufs-for-all-serializa.patch, 
 0004-HDFS-3809-for-branch-2.patch, HDFS-3809.diff, HDFS-3809.diff, 
 HDFS-3809.diff


 HDFS uses protobufs for serialization in many places. Protobufs allow fields 
 to be added without breaking backward compatibility or requiring new parsing 
 code to be written. For this reason, we should use them in BKJM also.



[jira] [Commented] (HDFS-3804) TestHftpFileSystem fails intermittently with JDK7

2012-10-31 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487767#comment-13487767
 ] 

Daryn Sharp commented on HDFS-3804:
---

+1 For consistency, do you think {{hftpFs}} should also be non-static?

 TestHftpFileSystem fails intermittently with JDK7
 -

 Key: HDFS-3804
 URL: https://issues.apache.org/jira/browse/HDFS-3804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HDFS-3804-2.patch, HDFS-3804-3.patch, HDFS-3804.patch, 
 HDFS-3804.patch


 For example:
   testFileNameEncoding(org.apache.hadoop.hdfs.TestHftpFileSystem): Filesystem 
 closed
   testDataNodeRedirect(org.apache.hadoop.hdfs.TestHftpFileSystem): Filesystem 
 closed
 This test case sets up a filesystem that is used by the first half of the 
 test methods (in declaration order), but the second half of the tests start 
 by calling {{FileSystem.closeAll}}. With JDK7, test methods are run in an 
 arbitrary order, so if any first half methods run after any second half 
 methods, they fail.
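The fix this implies can be sketched as follows (a minimal JUnit-free sketch with hypothetical names, not the actual TestHftpFileSystem code): build a fresh fixture in a per-test setUp() instead of sharing static state, so the methods pass in any execution order.

```java
// Sketch of an order-independent test class (hypothetical names, not the
// actual TestHftpFileSystem): setUp() builds a fresh fixture before every
// test, so no method depends on what an earlier method left behind.
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class OrderIndependentTests {
    // Minimal stand-in for a closeable filesystem handle.
    static class Fs {
        boolean closed;
        void use() {
            if (closed) throw new IllegalStateException("Filesystem closed");
        }
    }

    private Fs fs;

    void setUp() {
        fs = new Fs();  // fresh instance per test, never a shared static
    }

    void testRead() {
        fs.use();
    }

    void testCloseAll() {
        fs.use();
        fs.closed = true;  // mimics a test that calls FileSystem.closeAll
    }

    // Run the tests in a shuffled order, as JDK7 effectively does; the
    // per-test setUp() keeps every ordering green.
    public boolean runShuffled(long seed) {
        List<Runnable> tests = Arrays.asList(this::testRead, this::testCloseAll);
        Collections.shuffle(tests, new Random(seed));
        for (Runnable t : tests) {
            setUp();
            t.run();
        }
        return true;
    }
}
```

With a shared static fixture, any ordering that runs a closing test first would make the later tests fail with "Filesystem closed"; per-test setup removes that coupling.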



[jira] [Commented] (HDFS-4129) Add utility methods to dump NameNode in memory tree for testing

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487787#comment-13487787
 ] 

Hudson commented on HDFS-4129:
--

Integrated in Hadoop-Mapreduce-trunk #1242 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1242/])
HDFS-4129. Add utility methods to dump NameNode in memory tree for testing. 
Contributed by Tsz Wo (Nicholas), SZE. (Revision 1403956)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403956
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsLimits.java


 Add utility methods to dump NameNode in memory tree for testing
 ---

 Key: HDFS-4129
 URL: https://issues.apache.org/jira/browse/HDFS-4129
 Project: Hadoop HDFS
  Issue Type: Test
  Components: name-node
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Fix For: 3.0.0

 Attachments: h4129_20121029b.patch, h4129_20121029.patch, 
 h4129_20121030.patch


 The output of the utility methods looks like below.
 {noformat}
 \- foo   (INodeDirectory)
   \- sub1   (INodeDirectory)
 +- file1   (INodeFile)
 +- file2   (INodeFile)
 +- sub11   (INodeDirectory)
   \- file3   (INodeFile)
 \- z_file4   (INodeFile)
 {noformat}
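A dump of this shape can be produced by a simple recursive walk. The sketch below uses illustrative classes, not the real INode hierarchy: the last child of a directory is rendered with "\-" and earlier siblings with "+-".

```java
// Sketch reproducing the dump layout (illustrative classes, not the real
// INode code): pre-order recursion, where "prefix" carries the indentation
// accumulated from the current node's ancestors.
import java.util.ArrayList;
import java.util.List;

public class TreeDump {
    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        Node(String name) { this.name = name; }
        Node add(Node child) { children.add(child); return this; }
        String kind() { return children.isEmpty() ? "INodeFile" : "INodeDirectory"; }
    }

    // Render this node, then recurse into its children with a deeper prefix.
    static void dump(Node n, String prefix, boolean last, StringBuilder out) {
        out.append(prefix)
           .append(last ? "\\- " : "+- ")
           .append(n.name)
           .append("   (").append(n.kind()).append(")\n");
        String childPrefix = prefix + "  ";
        for (int i = 0; i < n.children.size(); i++) {
            dump(n.children.get(i), childPrefix, i == n.children.size() - 1, out);
        }
    }
}
```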



[jira] [Commented] (HDFS-3789) JournalManager#format() should be able to throw IOException

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487790#comment-13487790
 ] 

Hudson commented on HDFS-3789:
--

Integrated in Hadoop-Mapreduce-trunk #1242 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1242/])
Moved HDFS-3789 entry in CHANGES.txt from trunk to 2.0.3-alpha section 
(Revision 1403765)

 Result = FAILURE
umamahesh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403765
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 JournalManager#format() should be able to throw IOException
 ---

 Key: HDFS-3789
 URL: https://issues.apache.org/jira/browse/HDFS-3789
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ha, name-node
Affects Versions: 3.0.0
Reporter: Ivan Kelly
Assignee: Ivan Kelly
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: 0003-HDFS-3789-for-branch-2.patch, HDFS-3789.diff


 Currently JournalManager#format cannot throw any exception. As format can 
 fail, we should be able to propagate this failure upwards. Otherwise, format 
 will fail silently, and the admin will start using the cluster with a 
 failed/unusable journal manager.



[jira] [Commented] (HDFS-3573) Supply NamespaceInfo when instantiating JournalManagers

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487789#comment-13487789
 ] 

Hudson commented on HDFS-3573:
--

Integrated in Hadoop-Mapreduce-trunk #1242 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1242/])
Moved HDFS-3573 entry in CHANGES.txt from trunk to 2.0.3-alpha section 
(Revision 1403740)

 Result = FAILURE
umamahesh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403740
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Supply NamespaceInfo when instantiating JournalManagers
 ---

 Key: HDFS-3573
 URL: https://issues.apache.org/jira/browse/HDFS-3573
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: name-node
Affects Versions: 3.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: 0001-HDFS-3573-for-branch-2.patch, hdfs-3573.txt, 
 hdfs-3573.txt, hdfs-3573.txt, hdfs-3573.txt


 Currently, the JournalManagers are instantiated before the NamespaceInfo is 
 loaded from local storage directories. This is problematic since the JM may 
 want to verify that the storage info associated with the journal matches the 
 NN which is starting up (eg to prevent an operator accidentally configuring 
 two clusters against the same remote journal storage). This JIRA rejiggers 
 the initialization sequence so that the JMs receive NamespaceInfo as a 
 constructor argument.
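The verification the JM may want to do can be sketched as follows (illustrative names, not the actual HDFS classes): with NamespaceInfo supplied at construction, a journal manager can reject shared storage formatted for a different cluster before any edits are touched.

```java
// Sketch (illustrative names, not the actual HDFS code): passing the
// namespace info at construction time lets a journal manager check that
// the journal it opens belongs to this NameNode's cluster.
public class JournalCheck {
    static class NamespaceInfo {
        final int namespaceId;
        final String clusterId;
        NamespaceInfo(int namespaceId, String clusterId) {
            this.namespaceId = namespaceId;
            this.clusterId = clusterId;
        }
    }

    // Stand-in for the storage info recorded when the journal was formatted.
    static class RemoteJournal {
        final NamespaceInfo stored;
        RemoteJournal(NamespaceInfo stored) { this.stored = stored; }
    }

    static class JournalManager {
        private final NamespaceInfo expected;  // received as a constructor argument

        JournalManager(NamespaceInfo expected) { this.expected = expected; }

        // Reject a journal formatted for another cluster at startup.
        boolean matches(RemoteJournal journal) {
            return expected.namespaceId == journal.stored.namespaceId
                && expected.clusterId.equals(journal.stored.clusterId);
        }
    }
}
```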



[jira] [Commented] (HDFS-3916) libwebhdfs (C client) code cleanups

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487792#comment-13487792
 ] 

Hudson commented on HDFS-3916:
--

Integrated in Hadoop-Mapreduce-trunk #1242 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1242/])
HDFS-3916. libwebhdfs testing code cleanup. Contributed by Jing Zhao. 
(Revision 1403922)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403922
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_multi_write.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_ops.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_read.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_threaded.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_libwebhdfs_write.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/libwebhdfs/src/test_read_bm.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/native_mini_dfs.c
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/native_mini_dfs.h





[jira] [Commented] (HDFS-3695) Genericize format() to non-file JournalManagers

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487793#comment-13487793
 ] 

Hudson commented on HDFS-3695:
--

Integrated in Hadoop-Mapreduce-trunk #1242 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1242/])
Moved HDFS-3695 entry in CHANGES.txt from trunk to 2.0.3-alpha section 
(Revision 1403748)

 Result = FAILURE
umamahesh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403748
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt





[jira] [Commented] (HDFS-3809) Make BKJM use protobufs for all serialization with ZK

2012-10-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487794#comment-13487794
 ] 

Hudson commented on HDFS-3809:
--

Integrated in Hadoop-Mapreduce-trunk #1242 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1242/])
Moved HDFS-3809 entry in CHANGES.txt from trunk to 2.0.3-alpha section 
(Revision 1403769)

 Result = FAILURE
umamahesh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1403769
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt





[jira] [Updated] (HDFS-4080) Add an option to disable block-level state change logging

2012-10-31 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-4080:
-

Attachment: hdfs-4080.patch

The new patch addresses the review comment. The scope of the change is limited 
to the "block state change" messages from BlockManager and friends, and the 
calls through DatanodeProtocol.

 Add an option to disable block-level state change logging
 -

 Key: HDFS-4080
 URL: https://issues.apache.org/jira/browse/HDFS-4080
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: hdfs-4080.1.patch, hdfs-4080.patch, hdfs-4080.patch


 Although the block-level logging in the namenode is useful for debugging, it 
 can add significant overhead to busy HDFS clusters, since the logging is done 
 while the namespace write lock is held. One example is shown in HDFS-4075, 
 where the write lock was held for 5 minutes while logging 11 million log 
 messages for 5.5 million block invalidation events. 
 It would be useful to have an option to disable these block-level log 
 messages while keeping other state change messages going. If others feel that 
 they can be turned into DEBUG (with the addition of isDebugEnabled() checks), 
 that may work too, but there might be people depending on the messages.
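The guarded option described above can be sketched like this (a hypothetical wrapper, not the actual BlockManager code): check the enabled flag before building the message, so a disabled block-level logger does essentially no work while the write lock is held.

```java
// Sketch of the guarded-logging option (hypothetical wrapper, not the
// actual NameNode code): when block-level logging is disabled, the caller
// skips message construction entirely.
import java.util.ArrayList;
import java.util.List;

public class BlockStateChangeLog {
    private final boolean blockLogEnabled;       // the proposed on/off option
    final List<String> sink = new ArrayList<>(); // stands in for the real appender

    public BlockStateChangeLog(boolean blockLogEnabled) {
        this.blockLogEnabled = blockLogEnabled;
    }

    // Guard before formatting: a disabled logger does no string work.
    public void logBlockStateChange(String op, long blockId) {
        if (!blockLogEnabled) {
            return;
        }
        sink.add("BLOCK* " + op + ": blk_" + blockId);
    }
}
```

Other, non-block-level state change messages would bypass this guard and keep flowing as before.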



[jira] [Updated] (HDFS-4075) Reduce recommissioning overhead

2012-10-31 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-4075:
-

Attachment: hdfs-4075.patch

I excluded the logging change since HDFS-4080 will take care of it. The patch 
lets the recommissioning skip the over-replication check for dead nodes and 
logs the total number of over-replicated blocks per node.

I expect the future lock improvement will reduce the duration of namespace 
write locking.

 Reduce recommissioning overhead
 ---

 Key: HDFS-4075
 URL: https://issues.apache.org/jira/browse/HDFS-4075
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 0.23.4, 2.0.2-alpha
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Critical
 Attachments: hdfs-4075.patch, hdfs-4075.patch


 When datanodes are recommissioned, 
 {{BlockManager#processOverReplicatedBlocksOnReCommission()}} is called for each 
 rejoined node and excess blocks are added to the invalidate list. The problem 
 is that this is done while the namesystem write lock is held.



[jira] [Commented] (HDFS-4046) ChecksumTypeProto use NULL as enum value which is illegal in C/C++

2012-10-31 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487881#comment-13487881
 ] 

Kihwal Lee commented on HDFS-4046:
--

+1 (nb). Looks good to me. The namespace declaration was added in HADOOP-8985 
and HDFS-4121.

 ChecksumTypeProto use NULL as enum value which is illegal in C/C++
 --

 Key: HDFS-4046
 URL: https://issues.apache.org/jira/browse/HDFS-4046
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Minor
 Attachments: HDFS-4046-ChecksumType-NULL-and-TestAuditLogs-bug.patch, 
 HDFS-4046-ChecksumType-NULL.patch, HDFS-4096-ChecksumTypeProto-NULL.patch


 I tried to write a native HDFS client using the protobuf-based protocol. When 
 I generated C++ code from hdfs.proto, the generated file could not compile, 
 because NULL is an already-defined macro.
 I am considering two solutions:
 1. Refactor all DataChecksum.Type.NULL references to NONE, which should be 
 fine for all languages, but this may break compatibility.
 2. Only change the protobuf definition ChecksumTypeProto.NULL to NONE, and 
 use the enum integer value (DataChecksum.Type.id) to convert between 
 ChecksumTypeProto and DataChecksum.Type, making sure the enum integer values 
 match (they currently already do).
 I can make a patch for solution 2.
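Solution 2 can be sketched as follows (stand-in enums with assumed ids, not the generated protobuf classes or the real DataChecksum): rename only the wire-side value and convert through the shared integer id, so the Java-side enum keeps its existing NULL name.

```java
// Sketch of solution 2 (illustrative stand-ins, not the actual HDFS code):
// the wire enum renames NULL to NONE, and conversion goes through the
// shared integer id rather than the enum name.
public class ChecksumMapping {
    // Stand-in for the protobuf-generated enum after the rename.
    enum ChecksumTypeProto {
        CHECKSUM_NONE(0), CHECKSUM_CRC32(1), CHECKSUM_CRC32C(2);
        final int id;
        ChecksumTypeProto(int id) { this.id = id; }
    }

    // Stand-in for DataChecksum.Type, which keeps NULL internally.
    enum Type {
        NULL(0), CRC32(1), CRC32C(2);
        final int id;
        Type(int id) { this.id = id; }
    }

    // Convert by matching integer ids, never by name.
    static Type fromProto(ChecksumTypeProto p) {
        for (Type t : Type.values()) {
            if (t.id == p.id) return t;
        }
        throw new IllegalArgumentException("unknown checksum id " + p.id);
    }
}
```

Because the mapping is id-based, neither side has to expose a name that is illegal in the other language.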
  



[jira] [Commented] (HDFS-4080) Add an option to disable block-level state change logging

2012-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487962#comment-13487962
 ] 

Hadoop QA commented on HDFS-4080:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12551558/hdfs-4080.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3433//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3433//console

This message is automatically generated.




[jira] [Updated] (HDFS-3804) TestHftpFileSystem fails intermittently with JDK7

2012-10-31 Thread Trevor Robinson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trevor Robinson updated HDFS-3804:
--

Attachment: HDFS-3804-3.patch

Sure, done.

 TestHftpFileSystem fails intermittently with JDK7
 -

 Key: HDFS-3804
 URL: https://issues.apache.org/jira/browse/HDFS-3804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HDFS-3804-2.patch, HDFS-3804-3.patch, HDFS-3804-3.patch, 
 HDFS-3804.patch, HDFS-3804.patch


 For example:
   testFileNameEncoding(org.apache.hadoop.hdfs.TestHftpFileSystem): Filesystem 
 closed
   testDataNodeRedirect(org.apache.hadoop.hdfs.TestHftpFileSystem): Filesystem 
 closed
 This test case sets up a filesystem that is used by the first half of the 
 test methods (in declaration order), but the second half of the tests start 
 by calling {{FileSystem.closeAll}}. With JDK7, test methods are run in an 
 arbitrary order, so if any first half methods run after any second half 
 methods, they fail.



[jira] [Commented] (HDFS-4075) Reduce recommissioning overhead

2012-10-31 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487969#comment-13487969
 ] 

Daryn Sharp commented on HDFS-4075:
---

+1 I think it looks ok.  Eli, can you confirm?




[jira] [Commented] (HDFS-2802) Support for RW/RO snapshots in HDFS

2012-10-31 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487972#comment-13487972
 ] 

Suresh Srinivas commented on HDFS-2802:
---

[~shv] Thanks for your comments. Some answers here:
Versions seem like an interesting capability to support. I want to think about 
it more. However, snapshots have their own use cases as well. The ability to 
snapshot and attach a name to it comes in handy for writing applications using 
this functionality.

.snapshot is a convention that is used widely and requires no changes to many 
of the APIs such as rm, ls etc. In fact we had initially proposed using 
.snap_snapshot_name. After looking at many other file systems, where 
.snapshot convention is used to identify the snapshot, we decided to go with it.

bq. Creating duplicate INodes with a diff, this is sort of COW technique, 
right? Sounds hard.
Hopefully this should not add too much code complexity. But this is being done 
to avoid taking too much memory for snapshots on the namenode.

bq. My dumb question: can I create a snapshot of a subdirectory that is a part 
of a snapshot above it?
This is not a dumb question :-) The design already supports nested snapshots. I 
will add an update to the document describing this.
 

 Support for RW/RO snapshots in HDFS
 ---

 Key: HDFS-2802
 URL: https://issues.apache.org/jira/browse/HDFS-2802
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: data-node, name-node
Reporter: Hari Mankude
Assignee: Hari Mankude
 Attachments: HDFSSnapshotsDesign.pdf, snap.patch, 
 snapshot-one-pager.pdf, Snapshots20121018.pdf, Snapshots20121030.pdf


 Snapshots are point in time images of parts of the filesystem or the entire 
 filesystem. Snapshots can be a read-only or a read-write point in time copy 
 of the filesystem. There are several use cases for snapshots in HDFS. I will 
 post a detailed write-up soon with with more information.



[jira] [Commented] (HDFS-4075) Reduce recommissioning overhead

2012-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487975#comment-13487975
 ] 

Hadoop QA commented on HDFS-4075:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12551560/hdfs-4075.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3434//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3434//console

This message is automatically generated.




[jira] [Commented] (HDFS-4056) Always start the NN's SecretManager

2012-10-31 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13487977#comment-13487977
 ] 

Daryn Sharp commented on HDFS-4056:
---

bq. To me, a cluster is configured to run in either token testing mode or 
production mode.

The original goal was to have only one code path so tokens are always used, 
i.e. there is no testing mode. I've implemented PLAIN as a compromise, but 
there is no harm in having the secret manager running if a client using SIMPLE 
auth chooses to use tokens.

bq.  IMO, they make the Client and Server less intelligent in the sense that 
they don't recognize situations they used to recognize. I'm not sure their new 
behavior is desirable. For example, Client will always look for token and try 
to use it if found, even if configuration says otherwise.

I don't understand this objection.  If a token is available, why not use it?  
Under what scenario do you envision a client, for any external auth, requesting 
a token and then not wanting to use it?  If a cluster not using tokens wants to 
talk to a cluster requiring tokens, then doesn't it have to send the token 
regardless of the local config?

 Always start the NN's SecretManager
 ---

 Key: HDFS-4056
 URL: https://issues.apache.org/jira/browse/HDFS-4056
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-4056.patch


 To support the ability to use tokens regardless of whether kerberos is 
 enabled, the NN's secret manager should always be started.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-2802) Support for RW/RO snapshots in HDFS

2012-10-31 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13487983#comment-13487983
 ] 

Suresh Srinivas commented on HDFS-2802:
---

bq. I went ahead and scheduled this...

[~eli] I did not schedule it the week of Hadoop World even though it was 
convenient for me and Nicholas. I have shown sensitivity towards your team's 
travels. Unfortunately, Nicholas and I are scheduled to travel this week. 
Even though I had indicated that I would schedule a meeting, instead of giving 
me your feedback on time, you have gone ahead and scheduled this meeting 
without consulting the availability of the main authors of this jira. I am not 
sure what you are trying to achieve here.

My suggestion is to cancel the first meeting and stick with the second meeting. 
If the time for the second meeting does not work, Sanjay could reschedule it.



 Support for RW/RO snapshots in HDFS
 ---

 Key: HDFS-2802
 URL: https://issues.apache.org/jira/browse/HDFS-2802
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: data-node, name-node
Reporter: Hari Mankude
Assignee: Hari Mankude
 Attachments: HDFSSnapshotsDesign.pdf, snap.patch, 
 snapshot-one-pager.pdf, Snapshots20121018.pdf, Snapshots20121030.pdf


 Snapshots are point in time images of parts of the filesystem or the entire 
 filesystem. Snapshots can be a read-only or a read-write point in time copy 
 of the filesystem. There are several use cases for snapshots in HDFS. I will 
 post a detailed write-up soon with more information.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-2802) Support for RW/RO snapshots in HDFS

2012-10-31 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13488074#comment-13488074
 ] 

Eli Collins commented on HDFS-2802:
---

Hey Suresh,

This isn't the one and only discussion on snapshots, a lot of us plan to attend 
the following one as well. It's OK to have multiple discussions. A bunch of 
people are going to be around and discussing things on Thursday anyway, so I 
thought it better to open up the discussion to more people in the community. 
I'm sorry that you'll be out of town. I've set up a dial-in and would love it if 
you and others that are out could attend. I'll post minutes here so that people 
who cannot attend can see and participate.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3804) TestHftpFileSystem fails intermittently with JDK7

2012-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13488075#comment-13488075
 ] 

Hadoop QA commented on HDFS-3804:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12551578/HDFS-3804-3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3435//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3435//console

This message is automatically generated.

 TestHftpFileSystem fails intermittently with JDK7
 -

 Key: HDFS-3804
 URL: https://issues.apache.org/jira/browse/HDFS-3804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Attachments: HDFS-3804-2.patch, HDFS-3804-3.patch, HDFS-3804-3.patch, 
 HDFS-3804.patch, HDFS-3804.patch


 For example:
   testFileNameEncoding(org.apache.hadoop.hdfs.TestHftpFileSystem): Filesystem 
 closed
   testDataNodeRedirect(org.apache.hadoop.hdfs.TestHftpFileSystem): Filesystem 
 closed
 This test case sets up a filesystem that is used by the first half of the 
 test methods (in declaration order), but the second half of the tests start 
 by calling {{FileSystem.closeAll}}. With JDK7, test methods are run in an 
 arbitrary order, so if any first half methods run after any second half 
 methods, they fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-1331) dfs -test should work like /bin/test

2012-10-31 Thread Andy Isaacson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Isaacson updated HDFS-1331:


Status: Open  (was: Patch Available)

re-submitting patch to try to kick hadoopqa into action

 dfs -test should work like /bin/test
 

 Key: HDFS-1331
 URL: https://issues.apache.org/jira/browse/HDFS-1331
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.0.2-alpha, 0.20.2, 3.0.0
Reporter: Allen Wittenauer
Assignee: Andy Isaacson
Priority: Minor
 Attachments: hdfs1331-2.txt, hdfs1331.txt, 
 hdfs1331-with-hadoop8994.txt


 hadoop dfs -test doesn't act like its shell equivalent, making it difficult 
 to actually use if you are used to the real test command:
 hadoop:
 $ hadoop dfs -test -d /nonexist; echo $?
 test: File does not exist: /nonexist
 255
 shell:
 $ test -d /nonexist; echo $?
 1
 a) Why is it spitting out a message? Even so, why is it saying file instead 
 of directory when I used -d?
 b) Why is the return code 255? I realize this is documented as '0' if true.  
 But docs basically say the value is undefined if it isn't.
 c) where is -f?
 d) Why is empty -z instead of -s ?  Was it a misunderstanding of the man page?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-1331) dfs -test should work like /bin/test

2012-10-31 Thread Andy Isaacson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Isaacson updated HDFS-1331:


Status: Patch Available  (was: Open)


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-2802) Support for RW/RO snapshots in HDFS

2012-10-31 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13488112#comment-13488112
 ] 

Konstantin Shvachko commented on HDFS-2802:
---

bq. After looking at many other file systems, where the .snapshot convention ...

Can you send a link?
My concern is that you have a directory dr with two subdirectories sd2 and sd3. 
You create snapshots of sd2 and sd3 under the same name, then create a snapshot 
of dr with that same name. What happens?
If the system controls the ids, this never happens, because they are unique.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4056) Always start the NN's SecretManager

2012-10-31 Thread Kan Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13488141#comment-13488141
 ] 

Kan Zhang commented on HDFS-4056:
-

bq. ... there is no harm in having the secret manager running if a client using 
SIMPLE auth chooses to use tokens.

That would be very confusing and I certainly don't want to see it happen in my 
production cluster. I may configure a cluster to use either SIMPLE + SIMPLE or 
SIMPLE + TOKEN for different purposes, but I don't see why I would want to mix 
them up, especially in production (there is no security benefit, only overhead).

bq. I don't understand this objection. If a token is available, why not use it?

If the conf says SIMPLE + SIMPLE, then even if a token is present, it shouldn't 
be used. That token may be from a stale token file. RPC Client shouldn't even 
waste its time looking for tokens.

bq. If a cluster not using tokens wants to talk to a cluster requiring tokens, 
then doesn't it have to send the token regardless of the local config?

That's a different issue. I'm not sure we support an insecure cluster to talk 
to a secure one right now. Is that your goal?


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-2802) Support for RW/RO snapshots in HDFS

2012-10-31 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13488234#comment-13488234
 ] 

Suresh Srinivas commented on HDFS-2802:
---

bq. Can you send a link?
NetApp uses this convention; search on Google and you will find many other file 
systems that do too, such as the HP X9000 File System.

bq. My concern is that you have a directory dr with two subdirectories sd2 and 
sd3. You create a snapshot of sd2 and sd3 under the same name. Then create a 
snapshot of dr with the same name. What happens?
The design should work fine. Every snapshot has the following:
- A directory where the snapshot is created
- A name that is unique within the directory where the snapshot is taken
- Internally, snapshot dir + name is mapped to a unique snapshot ID, and the 
diffs are maintained against that snapshot ID.

When a snapshot is accessed, the path provides the snapshotted directory and the 
snapshot name. This pair is mapped to a unique snapshot ID, from which we can 
get to the snapshotted version.
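The mapping just described can be sketched in miniature (hypothetical C, not HDFS code; the fixed-size table and function names are purely illustrative):

```c
#include <stdio.h>
#include <string.h>

/* A snapshot is keyed by (directory, name); the name only has to be unique
 * within its directory, and each pair resolves to a unique snapshot ID that
 * the diffs are maintained against. */
struct snapshot { char dir[64]; char name[32]; int id; };

static struct snapshot table[64];
static int count, next_id;

/* returns the new unique ID, or -1 if (dir, name) already exists */
static int snapshot_create(const char *dir, const char *name)
{
    for (int i = 0; i < count; i++)
        if (strcmp(table[i].dir, dir) == 0 && strcmp(table[i].name, name) == 0)
            return -1;                     /* duplicate name in this directory */
    struct snapshot *s = &table[count++];
    snprintf(s->dir, sizeof s->dir, "%s", dir);
    snprintf(s->name, sizeof s->name, "%s", name);
    return s->id = next_id++;
}
```

So /dr, /dr/sd2, and /dr/sd3 can each hold a snapshot named s1: the three pairs map to three distinct internal IDs, which is why a name collision across nesting levels does not arise.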


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4045) SecondaryNameNode cannot read from QuorumJournal URI

2012-10-31 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13488246#comment-13488246
 ] 

Colin Patrick McCabe commented on HDFS-4045:


I guess we have to think about whether the overhead of avoiding an extra 
deserialize + serialize pair on the JN is worth violating the 
EditLogInputStream abstraction.  Unfortunately, I can't think of a good way to 
get the benefits of {{RedundantEditLogInputStream}} without using its intended 
interface -- {{readOp}}.

If you do choose to violate the abstraction, it would be best to do so by 
creating FileInputStream objects for the underlying edit log files, when 
appropriate.  Then just read directly from the files.  You'll still have a hard 
time dealing with IOExceptions in the middle of reading the file.  It wouldn't 
be *too* hard to do that failover yourself, but you run into sticky situations. 
 What if you get an IOException, fail over to the next file, and then discover 
that it's actually fairly different from the first one?  You might have sent 
bytes over the wire that completely fail the edit log operation checksums.

I think it's best to punt on the issue of zero-copy {{serveEdits}} for now.  
Maybe open up a follow-up JIRA for that.  Just do the simple thing of streaming 
it for now.  I also wonder whether serveEdits should serve a range of edits, 
rather than discrete edit log segments?  The API feels very segment-oriented, 
and I thought we were trying to get away from that when possible.

 SecondaryNameNode cannot read from QuorumJournal URI
 

 Key: HDFS-4045
 URL: https://issues.apache.org/jira/browse/HDFS-4045
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: name-node
Affects Versions: 3.0.0
Reporter: Vinithra Varadharajan
Assignee: Andy Isaacson
 Attachments: hdfs-4045-2.txt, hdfs4045-3.txt, hdfs-4045.txt


 If HDFS is set up in basic mode (non-HA) with QuorumJournal, and the 
 dfs.namenode.edits.dir is set to only the QuorumJournal URI and no local dir, 
 the SecondaryNameNode is unable to do a checkpoint.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4056) Always start the NN's SecretManager

2012-10-31 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13488252#comment-13488252
 ] 

Daryn Sharp commented on HDFS-4056:
---

bq. I'm not sure we support an insecure cluster to talk to a secure one right 
now.

Yes, it should work if you fetch the token yourself.  For clarification, the 
way it currently works:
* Insecure cluster:
** NN always returns no token (null) if client asks for one
** Client correctly handles null response as security disabled
** Client passes token, if present, in subsequent connections - tokens 
currently will never be present, unless fetchdt is used to obtain one from a 
secure cluster
** If a client attempts to use a token, the NN tells it to revert to SIMPLE
* Secure cluster:
** NN returns token if:
**# client is kerberos, else throws exception for other auths (like token)
**# if the secret manager is running, else returns null like an insecure cluster
** Client passes token, if present, in subsequent connections

This allows secure and insecure clusters to interoperate under the right 
conditions.  The main difference is insecure clients only fetch tokens if 
explicitly requested, whereas secure clients automatically attempt to fetch 
tokens.  Both always send tokens if available, and both interpret no token from 
an NN to mean security is disabled.
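The interoperability rules above can be condensed into a small decision sketch (illustrative C only, not Hadoop code; the flags and function names are assumptions for the example):

```c
/* NN side: hands out a token, answers "no token" (null), or rejects the
 * request, per the bullets above. */
enum nn_reply { REPLY_TOKEN, REPLY_NULL, REPLY_ERROR };

static enum nn_reply nn_get_token(int nn_secure,
                                  int secret_mgr_running,
                                  int caller_is_kerberos)
{
    if (!nn_secure)
        return REPLY_NULL;          /* insecure NN: always returns no token */
    if (!caller_is_kerberos)
        return REPLY_ERROR;         /* secure NN rejects non-kerberos callers */
    return secret_mgr_running ? REPLY_TOKEN : REPLY_NULL;
}

/* Client side: both secure and insecure clients send a token whenever one is
 * present; a null reply means security is treated as disabled. */
static int client_uses_token(enum nn_reply reply)
{
    return reply == REPLY_TOKEN;
}
```

Under these rules an insecure client that explicitly fetched a token from a secure NN (e.g. via fetchdt) still presents it on subsequent connections, which is the cross-cluster case discussed above.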

bq. That [the secret manager running?] would be very confusing and I certainly 
don't want to see it happen in my production cluster. I may configure a cluster 
to use either SIMPLE + SIMPLE or SIMPLE + TOKEN for different purposes, but I 
don't see why I want to mix them up, especially at production (there is no 
security benefit, only overhead).

What would be the confusion?  With this patch alone, there is no change and 
zero overhead for clients of an insecure cluster.  Clients from a secure 
cluster will however be able to request and use tokens on the insecure cluster.

bq. If the conf says SIMPLE + SIMPLE, then even if a token is present, it 
shouldn't be used. That token may be from a stale token file. RPC Client 
shouldn't even waste its time looking for tokens.

Trust me, I've dealt with so many token issues that my poor wife knows what 
they are. :)  I don't recall a problem ever being root-caused to a stale token 
file.  The overhead of the RPC client looking for a token in an empty 
collection in the UGI is negligible/moot.

Although there is no security benefit for an insecure cluster, the huge benefit 
is to the hadoop codebase by moving towards the elimination of multiple code 
paths related to security.  We can decrease the complexity and increase the 
code coverage.  Most importantly, we can remove the fragility caused by people 
accidentally breaking tokens because they have no access to a secure cluster.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4108) In a secure cluster, in the HDFS WEBUI , clicking on a datanode in the node list , gives an error

2012-10-31 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HDFS-4108:
---

Attachment: HDFS-4108-1-1.patch

Attaching the patch based on the MAPREDUCE-4661 codebase.
It also keeps the old behavior with an insecure cluster.


 In a secure cluster, in the HDFS WEBUI , clicking on a datanode in the node 
 list , gives an error
 -

 Key: HDFS-4108
 URL: https://issues.apache.org/jira/browse/HDFS-4108
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security, webhdfs
Affects Versions: 1.1.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Attachments: HDFS-4108-1-1.patch, HDFS-4108-1-1.patch


 This issue happens in secure cluster.
 To reproduce :
 Go to the NameNode WEB UI. (dfshealth.jsp)
 Click to bring up the list of LiveNodes  (dfsnodelist.jsp)
 Click on a datanode to bring up the filesystem  web page ( 
 browsedirectory.jsp)
 The page containing the directory listing does not come up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4108) In a secure cluster, in the HDFS WEBUI , clicking on a datanode in the node list , gives an error

2012-10-31 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13488331#comment-13488331
 ] 

Benoy Antony commented on HDFS-4108:


When the NN is in safe mode, the links will not work, since no delegation token 
can be generated; an exception is thrown indicating that the NN is in safe mode.



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4132) when libwebhdfs is not enabled, nativeMiniDfsClient frees uninitialized memory

2012-10-31 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-4132:
--

 Summary: when libwebhdfs is not enabled, nativeMiniDfsClient frees 
uninitialized memory 
 Key: HDFS-4132
 URL: https://issues.apache.org/jira/browse/HDFS-4132
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


When libwebhdfs is not enabled, nativeMiniDfsClient frees uninitialized memory.

Details: jconfStr is declared uninitialized...
{code}
struct NativeMiniDfsCluster* nmdCreate(struct NativeMiniDfsConf *conf)
{
    struct NativeMiniDfsCluster* cl = NULL;
    jobject bld = NULL, bld2 = NULL, cobj = NULL;
    jvalue  val;
    JNIEnv *env = getJNIEnv();
    jthrowable jthr;
    jstring jconfStr;
{code}

and it is only initialized later if conf->webhdfsEnabled is set:
{code}
...
    if (conf->webhdfsEnabled) {
        jthr = newJavaStr(env, DFS_WEBHDFS_ENABLED_KEY, &jconfStr);
        if (jthr) {
            printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
...
{code}

Then we try to free this uninitialized memory at the end, usually resulting in 
a crash:
{code}
    (*env)->DeleteLocalRef(env, jconfStr);
    return cl;
{code}
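For illustration, the same pattern outside JNI — a minimal sketch (hypothetical names, not the actual libhdfs patch) of why the fix is to initialize the conditionally-assigned pointer to NULL, so the unconditional cleanup at the end is always safe:

```c
#include <stdlib.h>
#include <string.h>

/* Analogue of nmdCreate's bug: a resource that is assigned on only one
 * branch must start out as NULL so the unconditional cleanup at the end
 * (free here, DeleteLocalRef in the JNI original) never sees garbage. */
static int create_thing(int feature_enabled)
{
    char *conf_str = NULL;              /* the fix: NULL, not uninitialized */

    if (feature_enabled) {
        conf_str = malloc(32);
        if (conf_str == NULL)
            return -1;                  /* allocation failed */
        strcpy(conf_str, "webhdfs.enabled");
    }

    /* ... work that may or may not touch conf_str ... */

    free(conf_str);                     /* free(NULL) is a defined no-op */
    return 0;
}
```

free(NULL) is guaranteed by the C standard to do nothing; common JVMs similarly tolerate DeleteLocalRef on a NULL reference, though guarding the call with an if check is the more defensive option.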

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4132) when libwebhdfs is not enabled, nativeMiniDfsClient frees uninitialized memory

2012-10-31 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4132:
---

Attachment: HDFS-4132.001.patch


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4132) when libwebhdfs is not enabled, nativeMiniDfsClient frees uninitialized memory

2012-10-31 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-4132:
---

Status: Patch Available  (was: Open)


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4132) when libwebhdfs is not enabled, nativeMiniDfsClient frees uninitialized memory

2012-10-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13488379#comment-13488379
 ] 

Hadoop QA commented on HDFS-4132:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12551642/HDFS-4132.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
    Please justify why no new tests are needed for this patch.
    Also please list what manual steps were performed to verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/3436//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3436//console

This message is automatically generated.

 when libwebhdfs is not enabled, nativeMiniDfsClient frees uninitialized 
 memory 
 ---

 Key: HDFS-4132
 URL: https://issues.apache.org/jira/browse/HDFS-4132
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-4132.001.patch


 When libwebhdfs is not enabled, nativeMiniDfsClient frees uninitialized 
 memory.
 Details: jconfStr is declared uninitialized...
 {code}
 struct NativeMiniDfsCluster* nmdCreate(struct NativeMiniDfsConf *conf)
 {
     struct NativeMiniDfsCluster* cl = NULL;
     jobject bld = NULL, bld2 = NULL, cobj = NULL;
     jvalue  val;
     JNIEnv *env = getJNIEnv();
     jthrowable jthr;
     jstring jconfStr;
 {code}
 and only initialized later if conf->webhdfsEnabled:
 {code}
 ...
 if (conf->webhdfsEnabled) {
     jthr = newJavaStr(env, DFS_WEBHDFS_ENABLED_KEY, &jconfStr);
     if (jthr) {
         printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
 ...
 {code}
 Then we try to free this uninitialized memory at the end, usually resulting 
 in a crash.
 {code}
 (*env)->DeleteLocalRef(env, jconfStr);
 return cl;
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4132) when libwebhdfs is not enabled, nativeMiniDfsClient frees uninitialized memory

2012-10-31 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13488387#comment-13488387
 ] 

Jing Zhao commented on HDFS-4132:
-

That's a bug brought in by HDFS-3923. Thanks for the fix, Colin!

 when libwebhdfs is not enabled, nativeMiniDfsClient frees uninitialized 
 memory 
 ---

 Key: HDFS-4132
 URL: https://issues.apache.org/jira/browse/HDFS-4132
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-4132.001.patch


 When libwebhdfs is not enabled, nativeMiniDfsClient frees uninitialized 
 memory.
 Details: jconfStr is declared uninitialized...
 {code}
 struct NativeMiniDfsCluster* nmdCreate(struct NativeMiniDfsConf *conf)
 {
     struct NativeMiniDfsCluster* cl = NULL;
     jobject bld = NULL, bld2 = NULL, cobj = NULL;
     jvalue  val;
     JNIEnv *env = getJNIEnv();
     jthrowable jthr;
     jstring jconfStr;
 {code}
 and only initialized later if conf->webhdfsEnabled:
 {code}
 ...
 if (conf->webhdfsEnabled) {
     jthr = newJavaStr(env, DFS_WEBHDFS_ENABLED_KEY, &jconfStr);
     if (jthr) {
         printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
 ...
 {code}
 Then we try to free this uninitialized memory at the end, usually resulting 
 in a crash.
 {code}
 (*env)->DeleteLocalRef(env, jconfStr);
 return cl;
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4133) Add testcases for testing basic snapshot functionalities

2012-10-31 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-4133:
---

 Summary: Add testcases for testing basic snapshot functionalities
 Key: HDFS-4133
 URL: https://issues.apache.org/jira/browse/HDFS-4133
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao


Add test cases for basic snapshot functionality. In the test we keep creating 
snapshots, modifying the original files, and checking previous snapshots.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4133) Add testcases for testing basic snapshot functionalities

2012-10-31 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4133:


Attachment: HDFS-4133.001.patch

Initial patch. Also renames the original TestSnapshot.java to 
TestSnapshotPathINodes.java.

 Add testcases for testing basic snapshot functionalities
 

 Key: HDFS-4133
 URL: https://issues.apache.org/jira/browse/HDFS-4133
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: data-node, name-node
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-4133.001.patch


 Add test cases for basic snapshot functionality. In the test we keep creating 
 snapshots, modifying the original files, and checking previous snapshots.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4056) Always start the NN's SecretManager

2012-10-31 Thread Kan Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13488438#comment-13488438
 ] 

Kan Zhang commented on HDFS-4056:
-

bq. Yes, it should work if you fetch the token yourself.

As far as I know, that is not by design.

bq. What would be the confusion?

Suppose some clients are configured to use tokens and some aren't. How do you make 
sure they did what they were supposed to do? Or do you not care? If a client 
happens to use a Hadoop conf that says "use tokens", it will fetch and use 
them; otherwise, it won't. Either way it works. It's hard to reason about 
cluster behavior; that is what I meant by confusing.

bq. I don't recall a problem ever being root-caused to a stale token file.

May I quote "Past performance is no guarantee of future results"? :)

 Always start the NN's SecretManager
 ---

 Key: HDFS-4056
 URL: https://issues.apache.org/jira/browse/HDFS-4056
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: name-node
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Attachments: HDFS-4056.patch


 To support the ability to use tokens regardless of whether kerberos is 
 enabled, the NN's secret manager should always be started.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-2802) Support for RW/RO snapshots in HDFS

2012-10-31 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-2802:
-

Attachment: snapshot-design.tex
snapshot-design.pdf

Hello all, attached please find an updated design proposal which attempts to 
merge the two designs as I previously described in [this 
comment|https://issues.apache.org/jira/browse/HDFS-2802?focusedCommentId=13485761page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13485761].
 This hybrid design merges the concepts of a snapshottable directory and 
user-initiated snapshots from the designs posted by Nicholas and Suresh, with 
the NN snapshot representation and client materialization of snapshots from the 
earlier design I posted. This document also expands upon the CLI commands/API 
that would be supported, as well as provides more details on how users would be 
able to access the snapshots available to them.

Please have a look and review. I'd appreciate any feedback you have.

I'm also uploading the .tex file used to produce this document and have removed 
the author listing. I hope this facilitates collaborating on a single design 
that we can eventually all agree to.

 Support for RW/RO snapshots in HDFS
 ---

 Key: HDFS-2802
 URL: https://issues.apache.org/jira/browse/HDFS-2802
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: data-node, name-node
Reporter: Hari Mankude
Assignee: Hari Mankude
 Attachments: HDFSSnapshotsDesign.pdf, snap.patch, 
 snapshot-design.pdf, snapshot-design.tex, snapshot-one-pager.pdf, 
 Snapshots20121018.pdf, Snapshots20121030.pdf


 Snapshots are point-in-time images of parts of the filesystem or the entire 
 filesystem. Snapshots can be a read-only or a read-write point-in-time copy 
 of the filesystem. There are several use cases for snapshots in HDFS. I will 
 post a detailed write-up soon with more information.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-2802) Support for RW/RO snapshots in HDFS

2012-10-31 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13488489#comment-13488489
 ] 

Aaron T. Myers commented on HDFS-2802:
--

Hi Konstantin,

bq. I like the idea of generating globally unique version ids, and assigning 
them to snapshots internally rather than letting people invent their own. One 
can always list available versions and read the desired one. So the 
-createSnapshot command does not need to pass snapname, but will instead get 
it in return.

Please take a look at the updated design document I just uploaded. It goes into 
a lot more detail about how snapshots would be named and accessed by users. 
From my understanding of your suggestion, I think it describes pretty much 
exactly what you propose with regard to this.

 Support for RW/RO snapshots in HDFS
 ---

 Key: HDFS-2802
 URL: https://issues.apache.org/jira/browse/HDFS-2802
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: data-node, name-node
Reporter: Hari Mankude
Assignee: Hari Mankude
 Attachments: HDFSSnapshotsDesign.pdf, snap.patch, 
 snapshot-design.pdf, snapshot-design.tex, snapshot-one-pager.pdf, 
 Snapshots20121018.pdf, Snapshots20121030.pdf


 Snapshots are point-in-time images of parts of the filesystem or the entire 
 filesystem. Snapshots can be a read-only or a read-write point-in-time copy 
 of the filesystem. There are several use cases for snapshots in HDFS. I will 
 post a detailed write-up soon with more information.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira