[jira] [Commented] (HDFS-4741) TestStorageRestore#testStorageRestoreFailure fails on Windows

2013-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13642641#comment-13642641
 ] 

Hadoop QA commented on HDFS-4741:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12580667/HADOOP-4741.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4322//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4322//console

This message is automatically generated.

> TestStorageRestore#testStorageRestoreFailure fails on Windows
> -
>
> Key: HDFS-4741
> URL: https://issues.apache.org/jira/browse/HDFS-4741
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0
>
> Attachments: HADOOP-4741.patch
>
>




[jira] [Updated] (HDFS-2576) Namenode should have a favored nodes hint to enable clients to have control over block placement.

2013-04-26 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HDFS-2576:
--

Attachment: hdfs-2576-trunk-8.3.patch

I had missed a one-line change in my previous patch. Here is the change that
should have been there:
{noformat}
-resolveNetworkLocation(nodeDescr);
+nodeDescr.setNetworkLocation(resolveNetworkLocation(nodeDescr));
{noformat}
The absence of that change led to the test failures. This patch includes it.
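
For context, a small self-contained sketch of why the missing assignment
mattered; the class and method shapes below are assumed for illustration only,
not the actual Hadoop code:
{code}
// Toy illustration: resolveNetworkLocation() returns the resolved rack rather
// than mutating the node, so the caller must store the result explicitly.
class NodeDescriptor {
  private String networkLocation;
  void setNetworkLocation(String loc) { this.networkLocation = loc; }
  String getNetworkLocation() { return networkLocation; }
}

class RegistrationSketch {
  static String resolveNetworkLocation(NodeDescriptor node) {
    return "/default-rack"; // stand-in for the real topology lookup
  }

  static void registerDatanode(NodeDescriptor nodeDescr) {
    // Without this assignment, the resolved location is computed and then
    // dropped, leaving the node's location unset (the failure mode above).
    nodeDescr.setNetworkLocation(resolveNetworkLocation(nodeDescr));
  }
}
{code}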

> Namenode should have a favored nodes hint to enable clients to have control 
> over block placement.
> -
>
> Key: HDFS-2576
> URL: https://issues.apache.org/jira/browse/HDFS-2576
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Pritam Damania
>Assignee: Devaraj Das
> Fix For: 2.0.5-beta
>
> Attachments: hdfs-2576-1.txt, hdfs-2576-trunk-1.patch, 
> hdfs-2576-trunk-2.patch, hdfs-2576-trunk-7.1.patch, hdfs-2576-trunk-7.patch, 
> hdfs-2576-trunk-8.1.patch, hdfs-2576-trunk-8.2.patch, 
> hdfs-2576-trunk-8.3.patch, hdfs-2576-trunk-8.patch
>
>
> Sometimes clients like HBase need to dynamically compute the datanodes on
> which to place the blocks of a file, for a higher level of locality. For this
> purpose there needs to be a way to give the Namenode a hint, in the form of a
> favoredNodes parameter, about the locations where the client wants to put
> each block. The proposed solution is a favored-nodes parameter in the
> addBlock() and create() methods that enables clients to give the NameNode
> hints about the location of each replica of the block. Note that this would
> be just a hint; in the end the NameNode would look at disk usage, datanode
> load, etc., and decide whether or not it can respect the hints.



[jira] [Created] (HDFS-4761) Refresh INodeMap in FSDirectory#reset()

2013-04-26 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-4761:
---

 Summary: Refresh INodeMap in FSDirectory#reset()
 Key: HDFS-4761
 URL: https://issues.apache.org/jira/browse/HDFS-4761
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor


When resetting FSDirectory, the inodeMap should also be reset, i.e., we should
clear the inodeMap and then put in the new root node.
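
A minimal, self-contained sketch of the proposed reset behavior, with toy
types standing in for the real FSDirectory code:
{code}
import java.util.HashMap;
import java.util.Map;

class InodeMapResetSketch {
  static class INode {
    final long id;
    INode(long id) { this.id = id; }
  }

  private final Map<Long, INode> inodeMap = new HashMap<Long, INode>();
  private INode rootDir = new INode(1);

  void reset() {
    rootDir = new INode(1);            // recreate the root directory node
    inodeMap.clear();                  // drop entries pointing at the old tree
    inodeMap.put(rootDir.id, rootDir); // register the fresh root
  }
}
{code}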



[jira] [Updated] (HDFS-4761) Refresh INodeMap in FSDirectory#reset()

2013-04-26 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4761:


Attachment: HDFS-4761.001.patch

A simple fix.

> Refresh INodeMap in FSDirectory#reset()
> ---
>
> Key: HDFS-4761
> URL: https://issues.apache.org/jira/browse/HDFS-4761
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-4761.001.patch
>
>
> When resetting FSDirectory, the inodeMap should also be reset, i.e., we
> should clear the inodeMap and then put in the new root node.



[jira] [Commented] (HDFS-4712) New libhdfs method hdfsGetDataNodes

2013-04-26 Thread andrea manzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13642667#comment-13642667
 ] 

andrea manzi commented on HDFS-4712:


Yes, I think this is what I need! I'm going to adapt my code and attach it to
the ticket.
Thanks a lot,
Andrea


> New libhdfs method hdfsGetDataNodes
> ---
>
> Key: HDFS-4712
> URL: https://issues.apache.org/jira/browse/HDFS-4712
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: libhdfs
>Reporter: andrea manzi
>
> We have implemented a possible extension to libhdfs to retrieve information
> about the available datanodes (there was initially a mail about this on the
> hadoop-hdfs-dev mailing list:
> http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201204.mbox/%3CCANhO-
> s0mvororrxpjnjbql6brkj4c7l+u816xkdc+2r0whj...@mail.gmail.com%3E).
> I would like to know how to proceed to create a patch, because on the wiki
> http://wiki.apache.org/hadoop/HowToContribute I can see info about Java
> patches but nothing related to extensions in C.



[jira] [Commented] (HDFS-2576) Namenode should have a favored nodes hint to enable clients to have control over block placement.

2013-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13642706#comment-13642706
 ] 

Hadoop QA commented on HDFS-2576:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12580672/hdfs-2576-trunk-8.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4323//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4323//console

This message is automatically generated.

> Namenode should have a favored nodes hint to enable clients to have control 
> over block placement.
> -
>
> Key: HDFS-2576
> URL: https://issues.apache.org/jira/browse/HDFS-2576
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Pritam Damania
>Assignee: Devaraj Das
> Fix For: 2.0.5-beta
>
> Attachments: hdfs-2576-1.txt, hdfs-2576-trunk-1.patch, 
> hdfs-2576-trunk-2.patch, hdfs-2576-trunk-7.1.patch, hdfs-2576-trunk-7.patch, 
> hdfs-2576-trunk-8.1.patch, hdfs-2576-trunk-8.2.patch, 
> hdfs-2576-trunk-8.3.patch, hdfs-2576-trunk-8.patch
>
>
> Sometimes clients like HBase need to dynamically compute the datanodes on
> which to place the blocks of a file, for a higher level of locality. For this
> purpose there needs to be a way to give the Namenode a hint, in the form of a
> favoredNodes parameter, about the locations where the client wants to put
> each block. The proposed solution is a favored-nodes parameter in the
> addBlock() and create() methods that enables clients to give the NameNode
> hints about the location of each replica of the block. Note that this would
> be just a hint; in the end the NameNode would look at disk usage, datanode
> load, etc., and decide whether or not it can respect the hints.



[jira] [Commented] (HDFS-4757) Update FSDirectory#inodeMap when replacing an INodeDirectory while setting quota

2013-04-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13642733#comment-13642733
 ] 

Hudson commented on HDFS-4757:
--

Integrated in Hadoop-Yarn-trunk #195 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/195/])
HDFS-4757. Update FSDirectory#inodeMap when replacing an INodeDirectory 
while setting quota.  Contributed by Jing Zhao (Revision 1476005)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476005
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java


> Update FSDirectory#inodeMap when replacing an INodeDirectory while setting 
> quota
> 
>
> Key: HDFS-4757
> URL: https://issues.apache.org/jira/browse/HDFS-4757
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-4757.001.patch
>
>
> When setting a quota on a directory, we may need to replace the original
> directory node with a new node with the same id. We need to update the
> inodeMap after the node replacement.
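
A minimal, self-contained sketch of the replacement pattern described above,
with toy types standing in for the real FSDirectory/INodeDirectory code:
{code}
import java.util.HashMap;
import java.util.Map;

class QuotaReplaceSketch {
  static class INode {
    final long id;
    INode(long id) { this.id = id; }
  }

  static class INodeWithQuota extends INode {
    final long nsQuota;
    INodeWithQuota(INode old, long nsQuota) {
      super(old.id); // the replacement keeps the same inode id
      this.nsQuota = nsQuota;
    }
  }

  public static void main(String[] args) {
    Map<Long, INode> inodeMap = new HashMap<Long, INode>();
    INode dir = new INode(42);
    inodeMap.put(dir.id, dir);

    // Setting a quota replaces the node with a quota-aware copy (same id)...
    INode replacement = new INodeWithQuota(dir, 1000);
    // ...so the inodeMap must be re-pointed at the replacement, or it would
    // keep handing out the stale node:
    inodeMap.put(replacement.id, replacement);
  }
}
{code}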



[jira] [Commented] (HDFS-4650) Add rename test in TestSnapshot

2013-04-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13642817#comment-13642817
 ] 

Hudson commented on HDFS-4650:
--

Integrated in Hadoop-Hdfs-Snapshots-Branch-build #169 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-Snapshots-Branch-build/169/])
HDFS-4650. Fix a bug in FSDirectory and add more unit tests for rename with 
existence of snapshottable directories and snapshots.  Contributed by Jing Zhao 
(Revision 1476012)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476012
Files : 
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/CHANGES.HDFS-2802.txt
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshot.java


> Add rename test in TestSnapshot
> ---
>
> Key: HDFS-4650
> URL: https://issues.apache.org/jira/browse/HDFS-4650
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4650.001.patch
>
>
> Add more unit tests and update current unit tests to cover different cases
> for rename in the presence of snapshottable directories and snapshots.



[jira] [Commented] (HDFS-4757) Update FSDirectory#inodeMap when replacing an INodeDirectory while setting quota

2013-04-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13642811#comment-13642811
 ] 

Hudson commented on HDFS-4757:
--

Integrated in Hadoop-Hdfs-trunk #1384 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1384/])
HDFS-4757. Update FSDirectory#inodeMap when replacing an INodeDirectory 
while setting quota.  Contributed by Jing Zhao (Revision 1476005)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476005
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java


> Update FSDirectory#inodeMap when replacing an INodeDirectory while setting 
> quota
> 
>
> Key: HDFS-4757
> URL: https://issues.apache.org/jira/browse/HDFS-4757
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-4757.001.patch
>
>
> When setting a quota on a directory, we may need to replace the original
> directory node with a new node with the same id. We need to update the
> inodeMap after the node replacement.



[jira] [Commented] (HDFS-4742) Fix appending to a renamed file with snapshot

2013-04-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13642815#comment-13642815
 ] 

Hudson commented on HDFS-4742:
--

Integrated in Hadoop-Hdfs-Snapshots-Branch-build #169 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-Snapshots-Branch-build/169/])
HDFS-4742. Fix appending to a renamed file with snapshot.  Contributed by 
Jing Zhao (Revision 1475903)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1475903
Files : 
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/CHANGES.HDFS-2802.txt
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeWithAdditionalFields.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LeaseManager.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java


> Fix appending to a renamed file with snapshot
> -
>
> Key: HDFS-4742
> URL: https://issues.apache.org/jira/browse/HDFS-4742
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4742.001.patch
>
>
> Fix a bug in appending to a renamed file.



[jira] [Updated] (HDFS-4712) New libhdfs method hdfsGetDataNodes

2013-04-26 Thread andrea manzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

andrea manzi updated HDFS-4712:
---

Attachment: hdfs.h.diff
hdfs.c.diff

These are the changes I have applied to both hdfs.h and hdfs.c. I have
implemented the call to the method getDataNodeStats, which takes as an
argument the value ALL, DEAD, or LIVE, as declared in the enum
HdfsConstants$DatanodeReportType.
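
For reference, a minimal sketch of the Java-side call that the wrapper
invokes, as described above; it assumes a DistributedFileSystem obtained from
the default configuration (class locations as in Hadoop 2.x):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;

public class DatanodeReportSketch {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS points at an HDFS cluster.
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    // ALL / DEAD / LIVE are the same report types the C wrapper exposes:
    DatanodeInfo[] live = dfs.getDataNodeStats(DatanodeReportType.LIVE);
    for (DatanodeInfo dn : live) {
      System.out.println(dn.getHostName() + " capacity=" + dn.getCapacity());
    }
  }
}
{code}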



> New libhdfs method hdfsGetDataNodes
> ---
>
> Key: HDFS-4712
> URL: https://issues.apache.org/jira/browse/HDFS-4712
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: libhdfs
>Reporter: andrea manzi
> Attachments: hdfs.c.diff, hdfs.h.diff
>
>
> We have implemented a possible extension to libhdfs to retrieve information
> about the available datanodes (there was initially a mail about this on the
> hadoop-hdfs-dev mailing list:
> http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201204.mbox/%3CCANhO-
> s0mvororrxpjnjbql6brkj4c7l+u816xkdc+2r0whj...@mail.gmail.com%3E).
> I would like to know how to proceed to create a patch, because on the wiki
> http://wiki.apache.org/hadoop/HowToContribute I can see info about Java
> patches but nothing related to extensions in C.



[jira] [Commented] (HDFS-4755) AccessControlException message is changed in snapshot branch

2013-04-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13642816#comment-13642816
 ] 

Hudson commented on HDFS-4755:
--

Integrated in Hadoop-Hdfs-Snapshots-Branch-build #169 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-Snapshots-Branch-build/169/])
HDFS-4755. Fix AccessControlException message and moves "implements 
LinkedElement" from INode to INodeWithAdditionalFields. (Revision 1476009)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476009
Files : 
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/CHANGES.HDFS-2802.txt
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeWithAdditionalFields.java


> AccessControlException message is changed in snapshot branch
> 
>
> Key: HDFS-4755
> URL: https://issues.apache.org/jira/browse/HDFS-4755
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>Priority: Minor
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: h4755_20130425.patch
>
>
> [~rramya] observed the following
> - Trunk:
> mkdir: org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=hrt_qa, access=WRITE, inode="hdfs":hdfs:hdfs:rwx-x-x
> - Snapshot branch:
> mkdir: org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=hrt_qa, access=WRITE, inode=/user/hdfs



[jira] [Commented] (HDFS-4749) Use INodeId to identify the corresponding directory node for FSImage saving/loading

2013-04-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13642818#comment-13642818
 ] 

Hudson commented on HDFS-4749:
--

Integrated in Hadoop-Hdfs-Snapshots-Branch-build #169 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-Snapshots-Branch-build/169/])
HDFS-4749. Use INodeId to identify the corresponding directory node in 
FSImage saving/loading.  Contributed by Jing Zhao (Revision 1475902)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1475902
Files : 
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/CHANGES.HDFS-2802.txt
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectoryWithSnapshot.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java


> Use INodeId to identify the corresponding directory node for FSImage 
> saving/loading
> ---
>
> Key: HDFS-4749
> URL: https://issues.apache.org/jira/browse/HDFS-4749
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4749.000.patch
>
>
> Currently in fsimage we use the path to locate a directory node for later
> loading, i.e., when loading a subtree from fsimage, we first read the path of
> the directory node and resolve the path to identify the directory node. This
> brings extra complexity, since we need to generate paths for directory nodes
> in both the current tree and the snapshot copies.
> As a simplification, we can use the INodeId to identify the directory node.



[jira] [Commented] (HDFS-4757) Update FSDirectory#inodeMap when replacing an INodeDirectory while setting quota

2013-04-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13642871#comment-13642871
 ] 

Hudson commented on HDFS-4757:
--

Integrated in Hadoop-Mapreduce-trunk #1411 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1411/])
HDFS-4757. Update FSDirectory#inodeMap when replacing an INodeDirectory 
while setting quota.  Contributed by Jing Zhao (Revision 1476005)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476005
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestINodeFile.java


> Update FSDirectory#inodeMap when replacing an INodeDirectory while setting 
> quota
> 
>
> Key: HDFS-4757
> URL: https://issues.apache.org/jira/browse/HDFS-4757
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-4757.001.patch
>
>
> When setting a quota on a directory, we may need to replace the original
> directory node with a new node with the same id. We need to update the
> inodeMap after the node replacement.



[jira] [Commented] (HDFS-4489) Use InodeID as an identifier of a file in HDFS protocols and APIs

2013-04-26 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13642887#comment-13642887
 ] 

Daryn Sharp commented on HDFS-4489:
---

I don't think Nathan and I are questioning the utility of the feature; we just
need to get a feel for the possible performance impact.  _If_ there is a
significant degradation, then it will delay our adoption of 2.x until it's
optimized.

I think a good performance test is to create a namespace of 150M paths, then
flood the NN with thousands of concurrent file and directory adds/deletes per
second throughout the namespace.  Hopefully there is an existing benchmark
with those properties.

> Use InodeID as an identifier of a file in HDFS protocols and APIs
> 
>
> Key: HDFS-4489
> URL: https://issues.apache.org/jira/browse/HDFS-4489
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.0.5-beta
>
>
> The benefits of using the InodeID to uniquely identify a file are manifold.
> Here are a few of them:
> 1. uniquely identify a file across renames; related JIRAs include HDFS-4258,
> HDFS-4437.
> 2. modification checks in tools like distcp: since a file could have been
> replaced or renamed, the file name and size combination is not reliable, but
> the combination of file id and size is unique.
> 3. id-based protocol support (e.g., NFS).
> 4. making the pluggable block placement policy use the fileid instead of the
> filename (HDFS-385).



[jira] [Commented] (HDFS-4489) Use InodeID as an identifier of a file in HDFS protocols and APIs

2013-04-26 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13642895#comment-13642895
 ] 

Suresh Srinivas commented on HDFS-4489:
---

bq. I think a good performance test is to create a namespace of 150M paths. 
Flood the NN with thousands of concurrent file & directory add/deletes per 
second throughout the namespace. Hopefully there is existing benchmark with 
those properties.
I think we are talking about hashmap entry addition and deletion during adds
and deletes of files, apart from the increased memory. I am not sure I
understand the cache-pollution part of the performance impact, given that
namenode core objects run into GBs in a large setup.

I am currently running some slive tests, but I do not currently have the
bandwidth to set up a namenode with 150M paths (that would require more than
64GB of JVM heap). Do you have some bandwidth to do these tests?

> Use InodeID as an identifier of a file in HDFS protocols and APIs
> 
>
> Key: HDFS-4489
> URL: https://issues.apache.org/jira/browse/HDFS-4489
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.0.5-beta
>
>
> The benefits of using the InodeID to uniquely identify a file are manifold.
> Here are a few of them:
> 1. uniquely identify a file across renames; related JIRAs include HDFS-4258,
> HDFS-4437.
> 2. modification checks in tools like distcp: since a file could have been
> replaced or renamed, the file name and size combination is not reliable, but
> the combination of file id and size is unique.
> 3. id-based protocol support (e.g., NFS).
> 4. making the pluggable block placement policy use the fileid instead of the
> filename (HDFS-385).



[jira] [Commented] (HDFS-4489) Use InodeID as an identifier of a file in HDFS protocols and APIs

2013-04-26 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13642907#comment-13642907
 ] 

Konstantin Shvachko commented on HDFS-4489:
---

Suresh,
0.20 is not a typo. You should parse it as sarcasm, sorry. Wire compatibility
was a target for many previous releases, and the train is still there.
We clearly have a disagreement about what should be in the release. Other
people may have other opinions. And that is my point.
All I ask is that we play by the rules. Make a release plan and put it to a
vote; see the bylaws under "Release Plan". I'll be glad to discuss your plan.
Here you act like it's your own branch, where you commit what you want and
nobody else cares.
Does that make sense?

> Use InodeID as an identifier of a file in HDFS protocols and APIs
> 
>
> Key: HDFS-4489
> URL: https://issues.apache.org/jira/browse/HDFS-4489
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.0.5-beta
>
>
> The benefits of using the InodeID to uniquely identify a file are manifold.
> Here are a few of them:
> 1. uniquely identify a file across renames; related JIRAs include HDFS-4258,
> HDFS-4437.
> 2. modification checks in tools like distcp: since a file could have been
> replaced or renamed, the file name and size combination is not reliable, but
> the combination of file id and size is unique.
> 3. id-based protocol support (e.g., NFS).
> 4. making the pluggable block placement policy use the fileid instead of the
> filename (HDFS-385).



[jira] [Commented] (HDFS-4489) Use InodeID as an identifier of a file in HDFS protocols and APIs

2013-04-26 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13642996#comment-13642996
 ] 

Suresh Srinivas commented on HDFS-4489:
---

bq. Here you act like its your own branch where you commit what you want and 
nobody else cares.
I fail to understand the need for such a hostile tone. That said, please look
at the many small features, improvements, and numerous bug fixes that have
been committed by me and other committers. Also, instead of stating your
objection to a change as "it is big, 150K lines of code", etc., it would be
great if you could really look at the patch and express more concrete
technical concerns related to stability.

I have reverted HDFS-4434. I have also responded on the 2.0.5 thread about
including the features that many have been working on for many months.

It seems to me that suddenly, in the past week or so, you have decided that
stability is the only thing that is paramount, disregarding all the
discussions that have happened. Please see my earlier comment on the
discussion related to API and wire protocol stability that we had months ago.

> Use InodeID as an identifier of a file in HDFS protocols and APIs
> 
>
> Key: HDFS-4489
> URL: https://issues.apache.org/jira/browse/HDFS-4489
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.0.5-beta
>
>
> The benefits of using the InodeID to uniquely identify a file are manifold.
> Here are a few of them:
> 1. uniquely identify a file across renames; related JIRAs include HDFS-4258,
> HDFS-4437.
> 2. modification checks in tools like distcp: since a file could have been
> replaced or renamed, the file name and size combination is not reliable, but
> the combination of file id and size is unique.
> 3. id-based protocol support (e.g., NFS).
> 4. making the pluggable block placement policy use the fileid instead of the
> filename (HDFS-385).



[jira] [Comment Edited] (HDFS-4489) Use InodeID as an identifier of a file in HDFS protocols and APIs

2013-04-26 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13642996#comment-13642996
 ] 

Suresh Srinivas edited comment on HDFS-4489 at 4/26/13 4:36 PM:


bq. Here you act like its your own branch where you commit what you want and 
nobody else cares.
I fail to understand the need for such a hostile tone. That said, please look
at the many small features, improvements, and numerous bug fixes that have
been committed by me and other committers into many of the 2.0.x releases,
without any discussion or need for a vote, entirely based on their judgement.

To be clear, a committer can commit to any branch. It is up to the release
manager to include it or not in a release.

Instead of stating your objection to a change as "it is big, 150K lines of
code", etc., it would be great if you could really look at the patch and
express more concrete technical concerns related to stability.

I have reverted HDFS-4434. I have also responded on the 2.0.5 thread about
including the features that many have been working on for many months.

It seems to me that suddenly, in the past week or so, you have decided that
stability is the only thing that is paramount, disregarding all the
discussions that have happened. Please see my earlier comment on the
discussion related to API and wire protocol stability that we had months ago.

> Use InodeID as an identifier of a file in HDFS protocols and APIs
> 
>
> Key: HDFS-4489
> URL: https://issues.apache.org/jira/browse/HDFS-4489
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.0.5-beta
>
>
> The benefits of using the InodeID to uniquely identify a file are manifold.
> Here are a few of them:
> 1. uniquely identify a file across renames; related JIRAs include HDFS-4258,
> HDFS-4437.
> 2. modification checks in tools like distcp: since a file could have been
> replaced or renamed, the file name and size combination is not reliable, but
> the combination of file id and size is unique.
> 3. id-based protocol support (e.g., NFS).
> 4. making the pluggable block placement policy use the fileid instead of the
> filename (HDFS-385).



[jira] [Commented] (HDFS-4712) New libhdfs method hdfsGetDataNodes

2013-04-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643010#comment-13643010
 ] 

Colin Patrick McCabe commented on HDFS-4712:


Hi Andrea,

Thanks for posting the code.  It would be better if you posted it in "diff" 
format.

See the section here on "generating a patch" for one possible way to do it.  
http://wiki.apache.org/hadoop/HowToContribute
(If you're familiar with the diff tool, you can also use that tool to generate 
patches.)

I see that you used 8-space indents in a few cases. Try to match the
surrounding code instead, so that the style stays consistent.

Let's create an enum called {{dataNodeReportType}} (or something similar) with 
ALL, DEAD, or LIVE as members.  That way we don't have to mess around with 
strings.  We can always add more enum members later if we need to without 
breaking the existing ABI.

{code}
typedef struct {
    long capacity;    /* The raw capacity */
    long dfsUsed;     /* The used space by the data node */
    ...
} hdfsDataNodeInfo;
{code}

This makes it impossible to do forward declarations.  I recommend this instead:
{code}
typedef struct hdfsDataNodeInfo_s {
    long capacity;    /* The raw capacity */
    long dfsUsed;     /* The used space by the data node */
    ...
} hdfsDataNodeInfo;
{code}

{code}
hdfsDataNodeInfo *hdfsGetDataNodeInfo(hdfsFS fs, const char *dataNodeType,
                                      int *numEntries);
void hdfsFreeDataNodeInfo(hdfsDataNodeInfo *hdfsDataNodeInfos, int numEntries);
{code}

I think the proposed API has some problems.  You are assuming that the client 
knows the exact size of {{struct hdfsDataNodeInfo}}.  This means that we cannot 
add fields to the end in the future, if the Java API gains some fields.

It would be better to have something like this:
{code}
enum dataNodeReportType {
    DN_REPORT_ALL = 0,
    DN_REPORT_DEAD,
    DN_REPORT_LIVE
};

hdfsDataNodeInfo **hdfsGetDataNodeInfo(hdfsFS fs,
                                       enum dataNodeReportType dataNodeType,
                                       int *numEntries);
void hdfsFreeDataNodeInfo(hdfsDataNodeInfo **hdfsDataNodeInfos);
{code}

In this case, the library would allocate an array of {{hdfsDataNodeInfo*}}
pointers, and then do a separate allocation for each {{hdfsDataNodeInfo}}
structure. That way, if we later need to add a foo and a bar field to the end
of {{struct hdfsDataNodeInfo}}, we can do that without breaking existing code
that uses the library.

It is customary to put a NULL pointer at the end of the array, so that 
{{hdfsFreeDataNodeInfo}} can just check for that rather than having to be told 
how long the array is.

> New libhdfs method hdfsGetDataNodes
> ---
>
> Key: HDFS-4712
> URL: https://issues.apache.org/jira/browse/HDFS-4712
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: libhdfs
>Reporter: andrea manzi
> Attachments: hdfs.c.diff, hdfs.h.diff
>
>
> We have implemented a possible extension to libhdfs to retrieve information
> about the available datanodes (there was initially a mail about this on the
> hadoop-hdfs-dev mailing list:
> http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-dev/201204.mbox/%3CCANhO-
> s0mvororrxpjnjbql6brkj4c7l+u816xkdc+2r0whj...@mail.gmail.com%3E).
> I would like to know how to proceed to create a patch, because on the wiki
> http://wiki.apache.org/hadoop/HowToContribute I can see info about Java
> patches but nothing related to extensions in C.



[jira] [Created] (HDFS-4762) Provide HDFS based NFSv3 and Mountd implementation

2013-04-26 Thread Brandon Li (JIRA)
Brandon Li created HDFS-4762:


 Summary: Provide HDFS based NFSv3 and Mountd implementation
 Key: HDFS-4762
 URL: https://issues.apache.org/jira/browse/HDFS-4762
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Brandon Li
Assignee: Brandon Li






[jira] [Created] (HDFS-4763) Add script changes/utility for starting NFS gateway

2013-04-26 Thread Brandon Li (JIRA)
Brandon Li created HDFS-4763:


 Summary: Add script changes/utility for starting NFS gateway
 Key: HDFS-4763
 URL: https://issues.apache.org/jira/browse/HDFS-4763
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Brandon Li






[jira] [Commented] (HDFS-4489) Use InodeID as an identifier of a file in HDFS protocols and APIs

2013-04-26 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643025#comment-13643025
 ] 

Nathan Roberts commented on HDFS-4489:
--

bq. Suresh is willing to do the performance benchmark, but I am trying to 
understand where you are coming from. Yahoo and FB create very large namespaces 
by simply buying more memory and increasing the size of the heap. 

This is not always possible. Some of our namenodes are running at the maximum 
configuration for the box (maximum memory, maximum heap, near maximum 
namespace). For these clusters, upgrading to this feature will require new 
boxes. 

bq. Do you worry about cache pollution when you create 50K more files? 
I don't worry about cache pollution when I create 50K more files. What's
important is the size of the working set. Inodes are a very popular object
within the NN; if inodes make up a significant part of our working set, then
it matters. I don't know whether this is the case or not, which is why I think
it makes sense to run some benchmarks to make sure we don't see any ill
effects. With the introduction of YARN, the central RM is rarely the
bottleneck; now it's much more common for the NN to be the bottleneck of the
cluster, and slowing down the bottleneck always needs to be looked at
carefully.

bq. Given that the NN heap (many GBs) is so much larger than the cache, does 
the additional inode and inode-map size impact the overall system performance? 
Good question. Let's find out.

bq. Suresh has argued that a 24GB heap grows by 625MB. 
I was using the numbers Todd gathered, where a 7G heap grew by 600MB. When we
looked at one of our key clusters, we calculated something like a 7.5%
increase.

bq. Looking at the growth in memory of this feature as a percentage of the 
total heap size is a more realistic way of looking at the impact of the growth 
than the growth of an individual data structure like the inode.
Maybe.   


bq. IMHO, not having an inode-map and inode number was a serious limitation in
the original implementation of the NN. I am willing to pay for the extra
memory given the value the inode-id and inode-map bring (as described by
Suresh in the beginning of this Jira). Permissions, access time, etc. added to
the memory cost of the NN and were accepted because of the value they bring.
Certainly agree it is a limitation. We just need to make sure we fully quantify 
all of the costs.  


> Use InodeID as an identifier of a file in HDFS protocols and APIs
> 
>
> Key: HDFS-4489
> URL: https://issues.apache.org/jira/browse/HDFS-4489
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.0.5-beta
>
>
> The benefits of using the InodeID to uniquely identify a file are manifold.
> Here are a few of them:
> 1. uniquely identify a file across renames; related JIRAs include HDFS-4258,
> HDFS-4437.
> 2. modification checks in tools like distcp: since a file could have been
> replaced or renamed, the file name and size combination is not reliable, but
> the combination of file id and size is unique.
> 3. id-based protocol support (e.g., NFS).
> 4. making the pluggable block placement policy use the fileid instead of the
> filename (HDFS-385).



[jira] [Updated] (HDFS-4721) Speed up lease/block recovery when DN fails and a block goes into recovery

2013-04-26 Thread Varun Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Sharma updated HDFS-4721:
---

Attachment: 4721-branch2.patch

> Speed up lease/block recovery when DN fails and a block goes into recovery
> --
>
> Key: HDFS-4721
> URL: https://issues.apache.org/jira/browse/HDFS-4721
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Varun Sharma
> Fix For: 2.0.4-alpha
>
> Attachments: 4721-branch2.patch, 4721-trunk.patch, 
> 4721-trunk-v2.patch, 4721-trunk-v3.patch, 4721-trunk-v4.patch, 4721-v2.patch, 
> 4721-v3.patch, 4721-v4.patch, 4721-v5.patch, 4721-v6.patch, 4721-v7.patch, 
> 4721-v8.patch
>
>
> This was observed while doing HBase WAL recovery. HBase uses append to write
> to its write-ahead log, so initially the pipeline is set up as
> DN1 --> DN2 --> DN3
> This WAL needs to be read when DN1 fails, since DN1 also hosts the HBase
> regionserver for the WAL.
> HBase first recovers the lease on the WAL file. During recovery, we choose
> DN1 as the primary DN to do the recovery, even though DN1 has failed and is
> no longer heartbeating.
> Avoiding the stale DN1 would speed up recovery and reduce HBase MTTR. There
> are two options:
> a) Ride on HDFS-3703: if stale-node detection is turned on, we do not choose
> stale datanodes (typically no heartbeat for 20-30 seconds) as primary DN(s).
> b) Sort the replicas in order of last heartbeat and always pick the ones
> that gave the most recent heartbeat (option (b) is sketched below).
> Going to the dead datanode lengthens lease and block recovery, since the
> block goes into the UNDER_RECOVERY state even though no one is actively
> recovering it.
> Please let me know if this makes sense and, if so, whether we should move
> forward with a) or b).
> Thanks
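
A minimal, self-contained sketch of option (b) above, with toy types assumed
for illustration (the real selection happens in the NameNode's block recovery
path):
{code}
class PrimaryDnSketch {
  static class Replica {
    final String datanode;
    final long lastHeartbeatMillis;
    Replica(String datanode, long lastHeartbeatMillis) {
      this.datanode = datanode;
      this.lastHeartbeatMillis = lastHeartbeatMillis;
    }
  }

  // Pick the replica whose datanode heartbeated most recently as the primary
  // for block recovery, so a dead or stale DN is never chosen.
  // Assumes at least one replica exists.
  static Replica choosePrimary(Replica[] replicas) {
    Replica best = replicas[0];
    for (Replica r : replicas) {
      if (r.lastHeartbeatMillis > best.lastHeartbeatMillis) {
        best = r;
      }
    }
    return best;
  }
}
{code}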



[jira] [Updated] (HDFS-4721) Speed up lease/block recovery when DN fails and a block goes into recovery

2013-04-26 Thread Varun Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Sharma updated HDFS-4721:
---

Attachment: 4721-trunk-v4.patch

> Speed up lease/block recovery when DN fails and a block goes into recovery
> --
>
> Key: HDFS-4721
> URL: https://issues.apache.org/jira/browse/HDFS-4721
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Varun Sharma
> Fix For: 2.0.4-alpha
>
> Attachments: 4721-branch2.patch, 4721-trunk.patch, 
> 4721-trunk-v2.patch, 4721-trunk-v3.patch, 4721-trunk-v4.patch, 4721-v2.patch, 
> 4721-v3.patch, 4721-v4.patch, 4721-v5.patch, 4721-v6.patch, 4721-v7.patch, 
> 4721-v8.patch
>
>
> This was observed while doing HBase WAL recovery. HBase uses append to write
> to its write-ahead log, so initially the pipeline is set up as
> DN1 --> DN2 --> DN3
> This WAL needs to be read when DN1 fails, since DN1 also hosts the HBase
> regionserver for the WAL.
> HBase first recovers the lease on the WAL file. During recovery, we choose
> DN1 as the primary DN to do the recovery, even though DN1 has failed and is
> no longer heartbeating.
> Avoiding the stale DN1 would speed up recovery and reduce HBase MTTR. There
> are two options:
> a) Ride on HDFS-3703: if stale-node detection is turned on, we do not choose
> stale datanodes (typically no heartbeat for 20-30 seconds) as primary DN(s).
> b) Sort the replicas in order of last heartbeat and always pick the ones
> that gave the most recent heartbeat.
> Going to the dead datanode lengthens lease and block recovery, since the
> block goes into the UNDER_RECOVERY state even though no one is actively
> recovering it.
> Please let me know if this makes sense and, if so, whether we should move
> forward with a) or b).
> Thanks



[jira] [Updated] (HDFS-4721) Speed up lease/block recovery when DN fails and a block goes into recovery

2013-04-26 Thread Varun Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Sharma updated HDFS-4721:
---

Attachment: (was: 4721-hadoop2.patch)

> Speed up lease/block recovery when DN fails and a block goes into recovery
> --
>
> Key: HDFS-4721
> URL: https://issues.apache.org/jira/browse/HDFS-4721
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Varun Sharma
> Fix For: 2.0.4-alpha
>
> Attachments: 4721-branch2.patch, 4721-trunk.patch, 
> 4721-trunk-v2.patch, 4721-trunk-v3.patch, 4721-trunk-v4.patch, 4721-v2.patch, 
> 4721-v3.patch, 4721-v4.patch, 4721-v5.patch, 4721-v6.patch, 4721-v7.patch, 
> 4721-v8.patch
>
>
> This was observed while doing HBase WAL recovery. HBase uses append to write
> to its write-ahead log, so initially the pipeline is set up as
> DN1 --> DN2 --> DN3
> This WAL needs to be read when DN1 fails, since DN1 also hosts the HBase
> regionserver for the WAL.
> HBase first recovers the lease on the WAL file. During recovery, we choose
> DN1 as the primary DN to do the recovery, even though DN1 has failed and is
> no longer heartbeating.
> Avoiding the stale DN1 would speed up recovery and reduce HBase MTTR. There
> are two options:
> a) Ride on HDFS-3703: if stale-node detection is turned on, we do not choose
> stale datanodes (typically no heartbeat for 20-30 seconds) as primary DN(s).
> b) Sort the replicas in order of last heartbeat and always pick the ones
> that gave the most recent heartbeat.
> Going to the dead datanode lengthens lease and block recovery, since the
> block goes into the UNDER_RECOVERY state even though no one is actively
> recovering it.
> Please let me know if this makes sense and, if so, whether we should move
> forward with a) or b).
> Thanks



[jira] [Commented] (HDFS-4750) Support NFSv3 interface to HDFS

2013-04-26 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643043#comment-13643043
 ] 

Brandon Li commented on HDFS-4750:
--

Hi folks,

I plan to split the initial implementation across 4 JIRAs (HADOOP-9509,
HADOOP-9515, HDFS-4762, HDFS-4763) and upload it there. These changes are
independent of the current Hadoop code base, but if it's preferred to do the
change in a different branch, please let me know.

Thanks,
Brandon


> Support NFSv3 interface to HDFS
> ---
>
> Key: HDFS-4750
> URL: https://issues.apache.org/jira/browse/HDFS-4750
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HADOOP-NFS-Proposal.pdf
>
>
> Access to HDFS is usually done through the HDFS client or webHDFS. The lack
> of seamless integration with the client's file system makes it difficult for
> users, and impossible for some applications, to access HDFS. NFS interface
> support is one way for HDFS to provide such easy integration.
> This JIRA is to track NFS protocol support for accessing HDFS. With the HDFS
> client, webHDFS, and the NFS interface, HDFS will be easier to access and be
> able to support more applications and use cases.
> We will upload the design document and the initial implementation.



[jira] [Updated] (HDFS-4763) Add script changes/utility for starting NFS gateway

2013-04-26 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4763:
-

Assignee: Brandon Li

> Add script changes/utility for starting NFS gateway
> ---
>
> Key: HDFS-4763
> URL: https://issues.apache.org/jira/browse/HDFS-4763
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Brandon Li
>Assignee: Brandon Li
>




[jira] [Updated] (HDFS-4761) Refresh INodeMap in FSDirectory#reset()

2013-04-26 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4761:


Status: Patch Available  (was: Open)

> Refresh INodeMap in FSDirectory#reset()
> ---
>
> Key: HDFS-4761
> URL: https://issues.apache.org/jira/browse/HDFS-4761
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-4761.001.patch
>
>
> When resetting FSDirectory, the inodeMap should also be reset, i.e., we
> should clear the inodeMap and then put in the new root node.



[jira] [Updated] (HDFS-4751) TestLeaseRenewer#testThreadName flakes

2013-04-26 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-4751:
--

Status: Patch Available  (was: Open)

> TestLeaseRenewer#testThreadName flakes
> --
>
> Key: HDFS-4751
> URL: https://issues.apache.org/jira/browse/HDFS-4751
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.0.5-beta
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: hdfs-4751-1.patch
>
>
> Seen internally and during upstream trunk builds, errors like the following:
> {noformat}
> Error Message:
>  Unfinished stubbing detected here: -> at 
> org.apache.hadoop.hdfs.TestLeaseRenewer.testThreadName(TestLeaseRenewer.java:197)
>   E.g. thenReturn() may be missing. Examples of correct stubbing: 
> when(mock.isOk()).thenReturn(true); 
> when(mock.isOk()).thenThrow(exception); 
> doThrow(exception).when(mock).someVoidMethod(); Hints:  1. missing 
> thenReturn()  2. although stubbed methods may return mocks, you cannot inline 
> mock creation (mock()) call inside a thenReturn method (see issue 53)
> Stack Trace:
> org.mockito.exceptions.misusing.UnfinishedStubbingException:
> Unfinished stubbing detected here:
> -> at 
> org.apache.hadoop.hdfs.TestLeaseRenewer.testThreadName(TestLeaseRenewer.java:197)
> {noformat}
> I believe it's due to the mock being stubbed while it is concurrently 
> accessed by another thread.
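For context, Mockito throws {{UnfinishedStubbingException}} when a stubbed mock 
is touched between {{when(...)}} and {{thenReturn(...)}}. A minimal sketch of 
the safe pattern, with illustrative names rather than the test's actual code:

{noformat}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// If another thread calls a method on `client` between when(...) and
// thenReturn(...), Mockito reports "unfinished stubbing". The safe
// pattern is to finish all stubbing before the mock is shared.
class StubbingRaceSketch {
  interface Client { boolean isOk(); }

  public static void main(String[] args) {
    Client client = mock(Client.class);
    when(client.isOk()).thenReturn(true);    // stubbing completed here
    new Thread(() -> client.isOk()).start(); // thread started only afterwards
  }
}
{noformat}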

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4659) Support setting execution bit for regular files

2013-04-26 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HDFS-4659:
---

Status: Open  (was: Patch Available)

> Support setting execution bit for regular files
> ---
>
> Key: HDFS-4659
> URL: https://issues.apache.org/jira/browse/HDFS-4659
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-4659.patch, HDFS-4659.patch, HDFS-4659.patch, 
> HDFS-4659.patch
>
>
> By default, regular files are created with mode "rw-r--r--", which is similar 
> to that on many UNIX platforms. However, setting the execution bit on regular 
> files is not supported by HDFS. 
> Setting the file access mode is the client's choice. HDFS would be easier to 
> use if it supported this, especially when HDFS is accessed through network 
> file system protocols. This JIRA is to track the change to support the 
> execution bit. 
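For reference, the client-side call already exists in the public FileSystem 
API; the work tracked here is for HDFS to honor the execute bits it receives. 
A small usage sketch (the path is illustrative):

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class SetExecBit {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Request mode rwxr-xr-x (0755) on a regular file; this JIRA is about
    // making HDFS support the execute bits on regular files.
    fs.setPermission(new Path("/user/foo/script.sh"),
        new FsPermission((short) 0755));
  }
}
{noformat}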

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4659) Support setting execution bit for regular files

2013-04-26 Thread Sanjay Radia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjay Radia updated HDFS-4659:
---

Status: Patch Available  (was: Open)

> Support setting execution bit for regular files
> ---
>
> Key: HDFS-4659
> URL: https://issues.apache.org/jira/browse/HDFS-4659
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-4659.patch, HDFS-4659.patch, HDFS-4659.patch, 
> HDFS-4659.patch
>
>
> By default, regular files are created with mode "rw-r--r--", which is similar 
> to that on many UNIX platforms. However, setting the execution bit on regular 
> files is not supported by HDFS. 
> Setting the file access mode is the client's choice. HDFS would be easier to 
> use if it supported this, especially when HDFS is accessed through network 
> file system protocols. This JIRA is to track the change to support the 
> execution bit. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4761) Refresh INodeMap in FSDirectory#reset()

2013-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643135#comment-13643135
 ] 

Hadoop QA commented on HDFS-4761:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12580675/HDFS-4761.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4324//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4324//console

This message is automatically generated.

> Refresh INodeMap in FSDirectory#reset()
> ---
>
> Key: HDFS-4761
> URL: https://issues.apache.org/jira/browse/HDFS-4761
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-4761.001.patch
>
>
> When resetting FSDirectory, the inodeMap should also be reset. I.e., we 
> should clear the inodeMap and then put in the new root node.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4721) Speed up lease/block recovery when DN fails and a block goes into recovery

2013-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643139#comment-13643139
 ] 

Hadoop QA commented on HDFS-4721:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12580722/4721-trunk-v4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4325//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4325//console

This message is automatically generated.

> Speed up lease/block recovery when DN fails and a block goes into recovery
> --
>
> Key: HDFS-4721
> URL: https://issues.apache.org/jira/browse/HDFS-4721
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Varun Sharma
> Fix For: 2.0.4-alpha
>
> Attachments: 4721-branch2.patch, 4721-trunk.patch, 
> 4721-trunk-v2.patch, 4721-trunk-v3.patch, 4721-trunk-v4.patch, 4721-v2.patch, 
> 4721-v3.patch, 4721-v4.patch, 4721-v5.patch, 4721-v6.patch, 4721-v7.patch, 
> 4721-v8.patch
>
>
> This was observed while doing HBase WAL recovery. HBase uses append to write 
> to its write-ahead log, so initially the pipeline is set up as
> DN1 --> DN2 --> DN3
> This WAL needs to be read when DN1 fails, since DN1 also houses the HBase 
> regionserver for the WAL.
> HBase first recovers the lease on the WAL file. During recovery, we choose 
> DN1 as the primary DN to do the recovery even though DN1 has failed and is 
> not heartbeating any more.
> Avoiding the stale DN1 would speed up recovery and reduce HBase MTTR. There 
> are two options:
> a) Ride on HDFS-3703: if stale node detection is turned on, we do not choose 
> stale datanodes (typically no heartbeat for 20-30 seconds) as primary DN(s).
> b) We sort the replicas in order of last heartbeat and always pick the ones 
> that gave the most recent heartbeat.
> Going to the dead datanode increases lease and block recovery time, since the 
> block goes into the UNDER_RECOVERY state even though no one is actively 
> recovering it. 
> Please let me know if this makes sense and, if so, whether we should move 
> forward with a) or b).
> Thanks
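For illustration, a toy sketch of option b): pick as primary the replica whose 
datanode heartbeated most recently. The types and fields are simplified 
stand-ins, not the actual block-recovery code.

{noformat}
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Toy model of option b): the recovery primary is the replica on the
// datanode with the most recent heartbeat, so a dead DN1 is never asked
// to drive lease/block recovery.
class PrimaryDnSketch {
  static class Replica {
    final String dn;
    final long lastHeartbeatMs;
    Replica(String dn, long hb) { this.dn = dn; this.lastHeartbeatMs = hb; }
  }

  static Replica choosePrimary(List<Replica> replicas) {
    return replicas.stream()
        .max(Comparator.comparingLong(r -> r.lastHeartbeatMs))
        .orElseThrow(IllegalStateException::new);
  }

  public static void main(String[] args) {
    long now = System.currentTimeMillis();
    List<Replica> rs = Arrays.asList(
        new Replica("DN1", now - 60_000), // stale: silent for 60s
        new Replica("DN2", now - 2_000),
        new Replica("DN3", now - 1_000));
    System.out.println(choosePrimary(rs).dn); // prints DN3
  }
}
{noformat}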

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-2576) Namenode should have a favored nodes hint to enable clients to have control over block placement.

2013-04-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-2576:
-

Component/s: namenode
 hdfs-client

+1 the new patch looks good.

> Namenode should have a favored nodes hint to enable clients to have control 
> over block placement.
> -
>
> Key: HDFS-2576
> URL: https://issues.apache.org/jira/browse/HDFS-2576
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs-client, namenode
>Reporter: Pritam Damania
>Assignee: Devaraj Das
> Fix For: 2.0.5-beta
>
> Attachments: hdfs-2576-1.txt, hdfs-2576-trunk-1.patch, 
> hdfs-2576-trunk-2.patch, hdfs-2576-trunk-7.1.patch, hdfs-2576-trunk-7.patch, 
> hdfs-2576-trunk-8.1.patch, hdfs-2576-trunk-8.2.patch, 
> hdfs-2576-trunk-8.3.patch, hdfs-2576-trunk-8.patch
>
>
> Sometimes clients like HBase need to dynamically compute the datanodes on 
> which they wish to place the blocks for a file, for a higher level of 
> locality. For this purpose there is a need for a way to give the NameNode a 
> hint, in the form of a favoredNodes parameter, about the locations where the 
> client wants to put each block. The proposed solution is a favored-nodes 
> parameter in the addBlock() method and in the create() file method, enabling 
> clients to give hints to the NameNode about the locations of each replica of 
> the block. Note that this would be just a hint; ultimately the NameNode would 
> look at disk usage, datanode load, etc. and decide whether it can respect the 
> hints or not.
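For a feel of the client side, a sketch of how the hint might be passed via the 
new create() overload on {{DistributedFileSystem}}; the exact signature, 
hostnames, and ports here are assumptions:

{noformat}
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class FavoredNodesExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs = (DistributedFileSystem)
        new Path("hdfs://nn.example.com:8020/").getFileSystem(conf);
    // Hint: prefer these datanodes for the file's replicas. The NameNode
    // may still override based on disk usage, datanode load, etc.
    InetSocketAddress[] favored = {
        new InetSocketAddress("dn1.example.com", 50010),
        new InetSocketAddress("dn2.example.com", 50010),
        new InetSocketAddress("dn3.example.com", 50010) };
    FSDataOutputStream out = dfs.create(new Path("/hbase/region1/wal"),
        FsPermission.getDefault(), true /* overwrite */, 4096 /* buffer */,
        (short) 3 /* replication */, 128L << 20 /* block size */,
        null /* progress */, favored);
    out.close();
  }
}
{noformat}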

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4721) Speed up lease/block recovery when DN fails and a block goes into recovery

2013-04-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4721:
-

Fix Version/s: (was: 2.0.4-alpha)
 Assignee: Varun Sharma
 Hadoop Flags: Reviewed

+1 patch looks good.

> Speed up lease/block recovery when DN fails and a block goes into recovery
> --
>
> Key: HDFS-4721
> URL: https://issues.apache.org/jira/browse/HDFS-4721
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Varun Sharma
>Assignee: Varun Sharma
> Attachments: 4721-branch2.patch, 4721-trunk.patch, 
> 4721-trunk-v2.patch, 4721-trunk-v3.patch, 4721-trunk-v4.patch, 4721-v2.patch, 
> 4721-v3.patch, 4721-v4.patch, 4721-v5.patch, 4721-v6.patch, 4721-v7.patch, 
> 4721-v8.patch
>
>
> This was observed while doing HBase WAL recovery. HBase uses append to write 
> to its write-ahead log, so initially the pipeline is set up as
> DN1 --> DN2 --> DN3
> This WAL needs to be read when DN1 fails, since DN1 also houses the HBase 
> regionserver for the WAL.
> HBase first recovers the lease on the WAL file. During recovery, we choose 
> DN1 as the primary DN to do the recovery even though DN1 has failed and is 
> not heartbeating any more.
> Avoiding the stale DN1 would speed up recovery and reduce HBase MTTR. There 
> are two options:
> a) Ride on HDFS-3703: if stale node detection is turned on, we do not choose 
> stale datanodes (typically no heartbeat for 20-30 seconds) as primary DN(s).
> b) We sort the replicas in order of last heartbeat and always pick the ones 
> that gave the most recent heartbeat.
> Going to the dead datanode increases lease and block recovery time, since the 
> block goes into the UNDER_RECOVERY state even though no one is actively 
> recovering it. 
> Please let me know if this makes sense and, if so, whether we should move 
> forward with a) or b).
> Thanks

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4751) TestLeaseRenewer#testThreadName flakes

2013-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643205#comment-13643205
 ] 

Hadoop QA commented on HDFS-4751:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12580448/hdfs-4751-1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestBlockReaderLocalLegacy

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4326//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4326//console

This message is automatically generated.

> TestLeaseRenewer#testThreadName flakes
> --
>
> Key: HDFS-4751
> URL: https://issues.apache.org/jira/browse/HDFS-4751
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.0.5-beta
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: hdfs-4751-1.patch
>
>
> Seen internally and during upstream trunk builds, errors like the following:
> {noformat}
> Error Message:
>  Unfinished stubbing detected here: -> at 
> org.apache.hadoop.hdfs.TestLeaseRenewer.testThreadName(TestLeaseRenewer.java:197)
>   E.g. thenReturn() may be missing. Examples of correct stubbing: 
> when(mock.isOk()).thenReturn(true); 
> when(mock.isOk()).thenThrow(exception); 
> doThrow(exception).when(mock).someVoidMethod(); Hints:  1. missing 
> thenReturn()  2. although stubbed methods may return mocks, you cannot inline 
> mock creation (mock()) call inside a thenReturn method (see issue 53)
> Stack Trace:
> org.mockito.exceptions.misusing.UnfinishedStubbingException:
> Unfinished stubbing detected here:
> -> at 
> org.apache.hadoop.hdfs.TestLeaseRenewer.testThreadName(TestLeaseRenewer.java:197)
> {noformat}
> I believe it's due to the mock being stubbed while it is concurrently 
> accessed by another thread.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4751) TestLeaseRenewer#testThreadName flakes

2013-04-26 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643214#comment-13643214
 ] 

Andrew Wang commented on HDFS-4751:
---

The test failure is unrelated; it has flaked on other builds too.

> TestLeaseRenewer#testThreadName flakes
> --
>
> Key: HDFS-4751
> URL: https://issues.apache.org/jira/browse/HDFS-4751
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.0.5-beta
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: hdfs-4751-1.patch
>
>
> Seen internally and during upstream trunk builds, errors like the following:
> {noformat}
> Error Message:
>  Unfinished stubbing detected here: -> at 
> org.apache.hadoop.hdfs.TestLeaseRenewer.testThreadName(TestLeaseRenewer.java:197)
>   E.g. thenReturn() may be missing. Examples of correct stubbing: 
> when(mock.isOk()).thenReturn(true); 
> when(mock.isOk()).thenThrow(exception); 
> doThrow(exception).when(mock).someVoidMethod(); Hints:  1. missing 
> thenReturn()  2. although stubbed methods may return mocks, you cannot inline 
> mock creation (mock()) call inside a thenReturn method (see issue 53)
> Stack Trace:
> org.mockito.exceptions.misusing.UnfinishedStubbingException:
> Unfinished stubbing detected here:
> -> at 
> org.apache.hadoop.hdfs.TestLeaseRenewer.testThreadName(TestLeaseRenewer.java:197)
> {noformat}
> I believe it's due to the mock being stubbed while it is concurrently 
> accessed by another thread.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4764) TestBlockReaderLocalLegacy flakes in MiniDFSCluster#shutdown

2013-04-26 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-4764:
-

 Summary: TestBlockReaderLocalLegacy flakes in 
MiniDFSCluster#shutdown
 Key: HDFS-4764
 URL: https://issues.apache.org/jira/browse/HDFS-4764
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Andrew Wang


I've seen this fail on two test-patch runs, and I'm pretty sure it's unrelated.

{noformat}
Error Message

Test resulted in an unexpected exit
Stacktrace

java.lang.AssertionError: Test resulted in an unexpected exit
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1416)
at 
org.apache.hadoop.hdfs.TestBlockReaderLocalLegacy.testBothOldAndNewShortCircuitConfigured(TestBlockReaderLocalLegacy.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-2576) Namenode should have a favored nodes hint to enable clients to have control over block placement.

2013-04-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643230#comment-13643230
 ] 

Hudson commented on HDFS-2576:
--

Integrated in Hadoop-trunk-Commit #3672 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3672/])
HDFS-2576. Enhances the DistributedFileSystem's create API so that clients 
can specify favored datanodes for a file's blocks. Contributed by Devaraj Das 
and Pritam Damania. (Revision 1476395)

 Result = SUCCESS
ddas : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476395
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockRetry.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFavoredNodesEndToEnd.java


> Namenode should have a favored nodes hint to enable clients to have control 
> over block placement.
> -
>
> Key: HDFS-2576
> URL: https://issues.apache.org/jira/browse/HDFS-2576
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs-client, namenode
>Reporter: Pritam Damania
>Assignee: Devaraj Das
> Fix For: 2.0.5-beta
>
> Attachments: hdfs-2576-1.txt, hdfs-2576-trunk-1.patch, 
> hdfs-2576-trunk-2.patch, hdfs-2576-trunk-7.1.patch, hdfs-2576-trunk-7.patch, 
> hdfs-2576-trunk-8.1.patch, hdfs-2576-trunk-8.2.patch, 
> hdfs-2576-trunk-8.3.patch, hdfs-2576-trunk-8.patch
>
>
> Sometimes clients like HBase need to dynamically compute the datanodes on 
> which they wish to place the blocks for a file, for a higher level of 
> locality. For this purpose there is a need for a way to give the NameNode a 
> hint, in the form of a favoredNodes parameter, about the locations where the 
> client wants to put each block. The proposed solution is a favored-nodes 
> parameter in the addBlock() method and in the create() file method, enabling 
> clients to give hints to the NameNode about the locations of each replica of 
> the block. Note that this would be just a hint; ultimately the NameNode would 
> look at disk usage, datanode load, etc. and decide whether it can respect the 
> hints or not.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-2576) Namenode should have a favored nodes hint to enable clients to have control over block placement.

2013-04-26 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643235#comment-13643235
 ] 

Devaraj Das commented on HDFS-2576:
---

Thanks, Pritam, for the work on the fb branch from which the trunk patch was 
derived. Thanks, everyone, for the review of the patches.

I'll submit a patch for branch-1 shortly.

> Namenode should have a favored nodes hint to enable clients to have control 
> over block placement.
> -
>
> Key: HDFS-2576
> URL: https://issues.apache.org/jira/browse/HDFS-2576
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs-client, namenode
>Reporter: Pritam Damania
>Assignee: Devaraj Das
> Fix For: 2.0.5-beta
>
> Attachments: hdfs-2576-1.txt, hdfs-2576-trunk-1.patch, 
> hdfs-2576-trunk-2.patch, hdfs-2576-trunk-7.1.patch, hdfs-2576-trunk-7.patch, 
> hdfs-2576-trunk-8.1.patch, hdfs-2576-trunk-8.2.patch, 
> hdfs-2576-trunk-8.3.patch, hdfs-2576-trunk-8.patch
>
>
> Sometimes clients like HBase need to dynamically compute the datanodes on 
> which they wish to place the blocks for a file, for a higher level of 
> locality. For this purpose there is a need for a way to give the NameNode a 
> hint, in the form of a favoredNodes parameter, about the locations where the 
> client wants to put each block. The proposed solution is a favored-nodes 
> parameter in the addBlock() method and in the create() file method, enabling 
> clients to give hints to the NameNode about the locations of each replica of 
> the block. Note that this would be just a hint; ultimately the NameNode would 
> look at disk usage, datanode load, etc. and decide whether it can respect the 
> hints or not.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-2576) Namenode should have a favored nodes hint to enable clients to have control over block placement.

2013-04-26 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643236#comment-13643236
 ] 

Devaraj Das commented on HDFS-2576:
---

Forgot to mention that I committed the patch to trunk.

> Namenode should have a favored nodes hint to enable clients to have control 
> over block placement.
> -
>
> Key: HDFS-2576
> URL: https://issues.apache.org/jira/browse/HDFS-2576
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs-client, namenode
>Reporter: Pritam Damania
>Assignee: Devaraj Das
> Fix For: 2.0.5-beta
>
> Attachments: hdfs-2576-1.txt, hdfs-2576-trunk-1.patch, 
> hdfs-2576-trunk-2.patch, hdfs-2576-trunk-7.1.patch, hdfs-2576-trunk-7.patch, 
> hdfs-2576-trunk-8.1.patch, hdfs-2576-trunk-8.2.patch, 
> hdfs-2576-trunk-8.3.patch, hdfs-2576-trunk-8.patch
>
>
> Sometimes clients like HBase need to dynamically compute the datanodes on 
> which they wish to place the blocks for a file, for a higher level of 
> locality. For this purpose there is a need for a way to give the NameNode a 
> hint, in the form of a favoredNodes parameter, about the locations where the 
> client wants to put each block. The proposed solution is a favored-nodes 
> parameter in the addBlock() method and in the create() file method, enabling 
> clients to give hints to the NameNode about the locations of each replica of 
> the block. Note that this would be just a hint; ultimately the NameNode would 
> look at disk usage, datanode load, etc. and decide whether it can respect the 
> hints or not.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4721) Speed up lease/block recovery when DN fails and a block goes into recovery

2013-04-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4721:
-

   Resolution: Fixed
Fix Version/s: 2.0.5-beta
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Varun!

> Speed up lease/block recovery when DN fails and a block goes into recovery
> --
>
> Key: HDFS-4721
> URL: https://issues.apache.org/jira/browse/HDFS-4721
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Varun Sharma
>Assignee: Varun Sharma
> Fix For: 2.0.5-beta
>
> Attachments: 4721-branch2.patch, 4721-trunk.patch, 
> 4721-trunk-v2.patch, 4721-trunk-v3.patch, 4721-trunk-v4.patch, 4721-v2.patch, 
> 4721-v3.patch, 4721-v4.patch, 4721-v5.patch, 4721-v6.patch, 4721-v7.patch, 
> 4721-v8.patch
>
>
> This was observed while doing HBase WAL recovery. HBase uses append to write 
> to its write-ahead log, so initially the pipeline is set up as
> DN1 --> DN2 --> DN3
> This WAL needs to be read when DN1 fails, since DN1 also houses the HBase 
> regionserver for the WAL.
> HBase first recovers the lease on the WAL file. During recovery, we choose 
> DN1 as the primary DN to do the recovery even though DN1 has failed and is 
> not heartbeating any more.
> Avoiding the stale DN1 would speed up recovery and reduce HBase MTTR. There 
> are two options:
> a) Ride on HDFS-3703: if stale node detection is turned on, we do not choose 
> stale datanodes (typically no heartbeat for 20-30 seconds) as primary DN(s).
> b) We sort the replicas in order of last heartbeat and always pick the ones 
> that gave the most recent heartbeat.
> Going to the dead datanode increases lease and block recovery time, since the 
> block goes into the UNDER_RECOVERY state even though no one is actively 
> recovering it. 
> Please let me know if this makes sense and, if so, whether we should move 
> forward with a) or b).
> Thanks

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4721) Speed up lease/block recovery when DN fails and a block goes into recovery

2013-04-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643246#comment-13643246
 ] 

Hudson commented on HDFS-4721:
--

Integrated in Hadoop-trunk-Commit #3673 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3673/])
HDFS-4721. Speed up lease recovery by avoiding stale datanodes and choosing 
the datanode with the most recent heartbeat as the primary.  Contributed by 
Varun Sharma (Revision 1476399)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476399
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfoUnderConstruction.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestHeartbeatHandling.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java


> Speed up lease/block recovery when DN fails and a block goes into recovery
> --
>
> Key: HDFS-4721
> URL: https://issues.apache.org/jira/browse/HDFS-4721
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.0.3-alpha
>Reporter: Varun Sharma
>Assignee: Varun Sharma
> Fix For: 2.0.5-beta
>
> Attachments: 4721-branch2.patch, 4721-trunk.patch, 
> 4721-trunk-v2.patch, 4721-trunk-v3.patch, 4721-trunk-v4.patch, 4721-v2.patch, 
> 4721-v3.patch, 4721-v4.patch, 4721-v5.patch, 4721-v6.patch, 4721-v7.patch, 
> 4721-v8.patch
>
>
> This was observed while doing HBase WAL recovery. HBase uses append to write 
> to its write-ahead log, so initially the pipeline is set up as
> DN1 --> DN2 --> DN3
> This WAL needs to be read when DN1 fails, since DN1 also houses the HBase 
> regionserver for the WAL.
> HBase first recovers the lease on the WAL file. During recovery, we choose 
> DN1 as the primary DN to do the recovery even though DN1 has failed and is 
> not heartbeating any more.
> Avoiding the stale DN1 would speed up recovery and reduce HBase MTTR. There 
> are two options:
> a) Ride on HDFS-3703: if stale node detection is turned on, we do not choose 
> stale datanodes (typically no heartbeat for 20-30 seconds) as primary DN(s).
> b) We sort the replicas in order of last heartbeat and always pick the ones 
> that gave the most recent heartbeat.
> Going to the dead datanode increases lease and block recovery time, since the 
> block goes into the UNDER_RECOVERY state even though no one is actively 
> recovering it. 
> Please let me know if this makes sense and, if so, whether we should move 
> forward with a) or b).
> Thanks

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4765) Permission check of symlink deletion incorrectly throws UnresolvedLinkException

2013-04-26 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-4765:
-

 Summary: Permission check of symlink deletion incorrectly throws 
UnresolvedLinkException
 Key: HDFS-4765
 URL: https://issues.apache.org/jira/browse/HDFS-4765
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.0.3-alpha, 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang


With permissions enabled, the permission check in {{FSNamesystem#delete}} will 
incorrectly throw an UnresolvedLinkException if the path contains a symlink. 
This leads to FileContext resolving the symlink and deleting the link target 
instead.

The correct check is to see if the user has write permissions on the parent 
directory of the symlink, e.g.

{noformat}
-> % ls -ld symtest
drwxr-xr-x 2 root root 4096 Apr 26 14:12 symtest
-> % ls -l symtest
total 12
lrwxrwxrwx 1 root root 6 Apr 26 14:12 link -> target
-rw-r--r-- 1 root root 0 Apr 26 14:11 target
-> % rm -f symtest/link
rm: cannot remove `symtest/link': Permission denied
-> % sudo chown andrew symtest
-> % rm -f symtest/link   
-> % 
{noformat}
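For illustration, a toy model of the intended semantics: the delete authorizes 
against the link's parent directory and never resolves the link. All names here 
are illustrative, not the FSNamesystem code.

{noformat}
import java.util.HashMap;
import java.util.Map;

// Toy model: deleting "symtest/link" must check WRITE on the parent
// "symtest" and remove the link entry itself, without ever resolving
// the link to its target.
class SymlinkDeleteSketch {
  static void delete(Map<String, String> parentDir, String linkName,
                     boolean userCanWriteParent) {
    if (!userCanWriteParent) {
      throw new SecurityException("Permission denied: cannot remove "
          + linkName + " from its parent directory");
    }
    parentDir.remove(linkName); // drops the link, never touches the target
  }

  public static void main(String[] args) {
    Map<String, String> symtest = new HashMap<>();
    symtest.put("link", "target"); // link -> target, as in the listing above
    delete(symtest, "link", true); // succeeds once the parent is writable
  }
}
{noformat}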

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4759) snapshotDiff of two invalid snapshots but with same name returns success

2013-04-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4759:
-

 Component/s: (was: datanode)
Priority: Minor  (was: Major)
Hadoop Flags: Reviewed

+1 patch looks good.

> snapshotDiff of two invalid snapshots but with same name returns success
> 
>
> Key: HDFS-4759
> URL: https://issues.apache.org/jira/browse/HDFS-4759
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Ramya Sunil
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4759.001.patch
>
>
> snapshotDiff of two invalid snapshots which have the same name returns 
> success.
> $ hadoop dfs -ls /user/foo/hdfs-snapshots/.snapshot
> Found 1 items
> drwx--   - foo foo  0 2013-04-26 00:53 
> /user/foo/hdfs-snapshots/.snapshot/s1
> $ hadoop snapshotDiff /user/foo/hdfs-snapshots invalid invalid 
> Difference between snapshot invalid and snapshot invalid under directory 
> /user/foo/hdfs-snapshots:
> -bash-4.1$ echo $?
> 0

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4766) Enable NN spnego filters only if kerberos is enabled

2013-04-26 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-4766:
-

 Summary: Enable NN spnego filters only if kerberos is enabled
 Key: HDFS-4766
 URL: https://issues.apache.org/jira/browse/HDFS-4766
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, security
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp


Counter-part to HADOOP-8779 to only enable SPNEGO if kerberos is enabled.
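A sketch of the intended gating; {{UserGroupInformation.isSecurityEnabled()}} 
is real Hadoop API, while the filter-registration helper is a hypothetical 
stand-in:

{noformat}
import org.apache.hadoop.security.UserGroupInformation;

// Sketch: only wire up the SPNEGO authentication filter when Kerberos
// is the configured authentication mechanism, so non-secure clusters
// are not forced through (and broken by) the filter.
class SpnegoGatingSketch {
  static void maybeAddSpnegoFilter() {
    if (UserGroupInformation.isSecurityEnabled()) {
      addSpnegoFilter(); // hypothetical: register the filter on the HTTP server
    }
  }

  private static void addSpnegoFilter() {
    // filter registration elided in this sketch
  }
}
{noformat}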

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4766) Enable NN spnego filters only if kerberos is enabled

2013-04-26 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-4766:
--

Description: Counter-part to HADOOP-9516 to only enable SPNEGO if kerberos 
is enabled.  (was: Counter-part to HADOOP-8779 to only enable SPNEGO if 
kerberos is enabled.)

> Enable NN spnego filters only if kerberos is enabled
> 
>
> Key: HDFS-4766
> URL: https://issues.apache.org/jira/browse/HDFS-4766
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>
> Counter-part to HADOOP-9516 to only enable SPNEGO if kerberos is enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4766) Enable NN spnego filters only if kerberos is enabled

2013-04-26 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-4766:
--

Attachment: HDFS-4766.patch

> Enable NN spnego filters only if kerberos is enabled
> 
>
> Key: HDFS-4766
> URL: https://issues.apache.org/jira/browse/HDFS-4766
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-4766.patch
>
>
> Counter-part to HADOOP-9516 to only enable SPNEGO if kerberos is enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4759) snapshotDiff of two invalid snapshots but with same name returns success

2013-04-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-4759.
--

Resolution: Fixed

I have committed this.  Thanks, Jing!

> snapshotDiff of two invalid snapshots but with same name returns success
> 
>
> Key: HDFS-4759
> URL: https://issues.apache.org/jira/browse/HDFS-4759
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Ramya Sunil
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4759.001.patch
>
>
> snapshotDiff of two invalid snapshots which have the same name returns 
> success.
> $ hadoop dfs -ls /user/foo/hdfs-snapshots/.snapshot
> Found 1 items
> drwx--   - foo foo  0 2013-04-26 00:53 
> /user/foo/hdfs-snapshots/.snapshot/s1
> $ hadoop snapshotDiff /user/foo/hdfs-snapshots invalid invalid 
> Difference between snapshot invalid and snapshot invalid under directory 
> /user/foo/hdfs-snapshots:
> -bash-4.1$ echo $?
> 0

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4760) Update inodeMap after node replacement

2013-04-26 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4760:


Attachment: HDFS-4760.001.patch

Initial patch.

> Update inodeMap after node replacement
> --
>
> Key: HDFS-4760
> URL: https://issues.apache.org/jira/browse/HDFS-4760
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-4760.001.patch
>
>
> Similar to HDFS-4757, we need to update the inodeMap after node 
> replacement. Because a lot of node replacement happens in the snapshot branch 
> (e.g., INodeDirectory => INodeDirectoryWithSnapshot, INodeDirectory <=> 
> INodeDirectorySnapshottable, INodeFile => INodeFileWithSnapshot ...), this 
> becomes a non-trivial issue.
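For illustration, a toy model of the bookkeeping involved: when an inode object 
is replaced by a wrapper with the same id, the id-keyed map must be repointed 
at the new object. Names are simplified stand-ins.

{noformat}
import java.util.Map;

// Toy model: node replacement creates a new INode object for the same
// inode id; unless the inodeMap entry is updated, id lookups keep
// returning the replaced (stale) object.
class NodeReplacementSketch {
  static class INode {
    final long id;
    INode(long id) { this.id = id; }
  }

  static void replaceNode(Map<Long, INode> inodeMap,
                          INode oldNode, INode newNode) {
    assert oldNode.id == newNode.id;   // replacement preserves the inode id
    inodeMap.put(newNode.id, newNode); // repoint the map at the new object
  }
}
{noformat}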

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4740) Fixes for a few test failures on Windows

2013-04-26 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4740:


Summary: Fixes for a few test failures on Windows  (was: TestWebHdfsUrl and 
TestDFSUtil fail on Windows)

> Fixes for a few test failures on Windows
> 
>
> Key: HDFS-4740
> URL: https://issues.apache.org/jira/browse/HDFS-4740
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0
>
> Attachments: HDFS-4740.002.patch, HDFS-4740.patch
>
>
> This issue is to track the following Windows test failures:
> # TestWebHdfsUrl#testSecureAuthParamsInUrl fails with timeout. The 4 second 
> timeout is too low.
> # TestDFSUtil#testGetNNUris depends on the 127.0.0.1->localhost reverse 
> lookup which does not happen on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4740) Fixes for a few test failures on Windows

2013-04-26 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4740:


Description: 
This issue is to track the following Windows test failures:
# TestWebHdfsUrl#testSecureAuthParamsInUrl fails with timeout. The 4 second 
timeout is too low.
# TestDFSUtil#testGetNNUris depends on the 127.0.0.1->localhost reverse lookup 
which does not happen on Windows.
# TestLargeBlock#testLargeBlockSize fails with timeout. This test takes a 
rather long time to complete on Windows. Part of the problem may be that we are 
using small VMs for Windows testing.

  was:
This issue is to track the following Windows test failures:
# TestWebHdfsUrl#testSecureAuthParamsInUrl fails with timeout. The 4 second 
timeout is too low.
# TestDFSUtil#testGetNNUris depends on the 127.0.0.1->localhost reverse lookup 
which does not happen on Windows.


> Fixes for a few test failures on Windows
> 
>
> Key: HDFS-4740
> URL: https://issues.apache.org/jira/browse/HDFS-4740
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0
>
> Attachments: HDFS-4740.002.patch, HDFS-4740.patch
>
>
> This issue is to track the following Windows test failures:
> # TestWebHdfsUrl#testSecureAuthParamsInUrl fails with timeout. The 4 second 
> timeout is too low.
> # TestDFSUtil#testGetNNUris depends on the 127.0.0.1->localhost reverse 
> lookup which does not happen on Windows.
> # TestLargeBlock#testLargeBlockSize fails with timeout. This test takes a 
> rather long time to complete on Windows. Part of the problem may be that we 
> are using small VMs for Windows testing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4740) Fixes for a few test failures on Windows

2013-04-26 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4740:


Attachment: HDFS-4740.003.patch

Adding one more trivial test fix to the patch for 
TestLargeBlock#testLargeBlockSize.

> Fixes for a few test failures on Windows
> 
>
> Key: HDFS-4740
> URL: https://issues.apache.org/jira/browse/HDFS-4740
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0
>
> Attachments: HDFS-4740.002.patch, HDFS-4740.003.patch, HDFS-4740.patch
>
>
> This issue is to track the following Windows test failures:
> # TestWebHdfsUrl#testSecureAuthParamsInUrl fails with timeout. The 4 second 
> timeout is too low.
> # TestDFSUtil#testGetNNUris depends on the 127.0.0.1->localhost reverse 
> lookup which does not happen on Windows.
> # TestLargeBlock#testLargeBlockSize fails with timeout. This test takes a 
> rather long time to complete on Windows. Part of the problem may be that we 
> are using small VMs for Windows testing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4754) Add an API in the namenode to mark a datanode as stale

2013-04-26 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643276#comment-13643276
 ] 

Aaron T. Myers commented on HDFS-4754:
--

Hi Nicolas, in general I'm a little leery of adding a client API which allows 
arbitrary clients to affect the perceived health of whole DNs, given the 
potential for abuse. The only mildly similar thing that currently exists that 
I'm aware of is the ClientProtocol#reportBadBlocks API, though that obviously 
works only with single replicas, not whole DNs.

That said, I won't block this change, especially if we make it possible to 
disable the feature on the server side.

> Add an API in the namenode to mark a datanode as stale
> --
>
> Key: HDFS-4754
> URL: https://issues.apache.org/jira/browse/HDFS-4754
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, namenode
>Reporter: Nicolas Liochon
>Priority: Critical
>
> HDFS has had stale datanode detection since HDFS-3703, with a timeout 
> defaulting to 30s.
> There are two reasons to add an API to mark a node as stale even if the 
> timeout has not yet been reached:
>  1) ZooKeeper can detect that a client is dead at any moment. So, for HBase, 
> we sometimes start the recovery before a node is marked stale (even with 
> reasonable settings such as stale: 20s; HBase ZK timeout: 30s).
>  2) Some third parties could detect that a node is dead before the timeout, 
> hence saving us the cost of retrying. An example of such hardware is Arista, 
> presented here by [~tsuna] 
> http://tsunanet.net/~tsuna/fsf-hbase-meetup-april13.pdf, and confirmed in 
> HBASE-6290.
> As usual, even if the node is dead it can come back before the 10-minute 
> limit, so I would propose to set a time bound. The API would be
> namenode.markStale(String ipAddress, int port, long durationInMs);
> After durationInMs, the namenode would again rely only on its heartbeats to 
> decide.
> Thoughts?
> If there are no objections, and if nobody on the hdfs dev team has the time 
> to spend on it, I will give it a try for branches 2 & 3.
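Since the API above is only a proposal, here is a purely hypothetical sketch of 
the time-bounded override it describes:

{noformat}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the proposal: a client-set stale flag that
// expires after durationInMs, after which staleness is again decided
// from heartbeat age alone (as in HDFS-3703).
class StaleOverrideSketch {
  private final Map<String, Long> staleUntil = new ConcurrentHashMap<>();

  void markStale(String ipAddress, int port, long durationInMs) {
    staleUntil.put(ipAddress + ":" + port,
        System.currentTimeMillis() + durationInMs);
  }

  boolean isStale(String dnKey, long lastHeartbeatMs, long staleIntervalMs) {
    Long until = staleUntil.get(dnKey);
    if (until != null && System.currentTimeMillis() < until) {
      return true; // explicit override still in force
    }
    // Otherwise fall back to the usual heartbeat-based detection.
    return System.currentTimeMillis() - lastHeartbeatMs > staleIntervalMs;
  }
}
{noformat}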

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4767) directory is not snapshottable after clrQuota

2013-04-26 Thread Ramya Sunil (JIRA)
Ramya Sunil created HDFS-4767:
-

 Summary: directory is not snapshottable after clrQuota
 Key: HDFS-4767
 URL: https://issues.apache.org/jira/browse/HDFS-4767
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Ramya Sunil
 Fix For: Snapshot (HDFS-2802)


1. hadoop dfs -mkdir /user/foo/hdfs-snapshots

2. hadoop dfsadmin -setQuota 1 /user/foo/hdfs-snapshots

3. hadoop dfsadmin -allowSnapshot /user/foo/hdfs-snapshots
Allowing snaphot on /user/foo/hdfs-snapshots succeeded

4. hadoop dfs -createSnapshot /user/foo/hdfs-snapshots s1
createSnapshot: org.apache.hadoop.hdfs.protocol.NSQuotaExceededException: The 
NameSpace quota (directories and files) is exceeded: quota=1 file count=2

5. hadoop dfsadmin -clrQuota /user/foo/hdfs-snapshots

6. hadoop dfs -createSnapshot /user/foo/hdfs-snapshots s1
createSnapshot: 
org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotException: Directory is 
not a snapshottable directory: /user/foo/hdfs-snapshots

Step 6 should have succeeded since the directory was already snapshottable 
(in step 3).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HDFS-4767) directory is not snapshottable after clrQuota

2013-04-26 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao reassigned HDFS-4767:
---

Assignee: Jing Zhao

> directory is not snapshottable after clrQuota
> -
>
> Key: HDFS-4767
> URL: https://issues.apache.org/jira/browse/HDFS-4767
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Ramya Sunil
>Assignee: Jing Zhao
> Fix For: Snapshot (HDFS-2802)
>
>
> 1. hadoop dfs -mkdir /user/foo/hdfs-snapshots
> 2. hadoop dfsadmin -setQuota 1 /user/foo/hdfs-snapshots
> 3. hadoop dfsadmin -allowSnapshot /user/foo/hdfs-snapshots
> Allowing snaphot on /user/foo/hdfs-snapshots succeeded
> 4. hadoop dfs -createSnapshot /user/foo/hdfs-snapshots s1
> createSnapshot: org.apache.hadoop.hdfs.protocol.NSQuotaExceededException: The 
> NameSpace quota (directories and files) is exceeded: quota=1 file count=2
> 5. hadoop dfsadmin -clrQuota /user/foo/hdfs-snapshots
> 6. hadoop dfs -createSnapshot /user/foo/hdfs-snapshots s1
> createSnapshot: 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotException: Directory 
> is not a snapshottable directory: /user/foo/hdfs-snapshots
> Step 6 should have succeeded since the directory was already snapshottable 
> (in step 3).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4768) block scanner does not close verification log when a block pool is being deleted (but the datanode remains running)

2013-04-26 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-4768:
---

 Summary: block scanner does not close verification log when a 
block pool is being deleted (but the datanode remains running)
 Key: HDFS-4768
 URL: https://issues.apache.org/jira/browse/HDFS-4768
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.4-alpha
Reporter: Chris Nauroth
Assignee: Chris Nauroth


HDFS-4274 fixed a file handle leak of the block scanner's verification logs by 
adding method {{BlockPoolSliceScanner#shutdown}} and guaranteeing that the 
method gets called for each live {{BlockPoolSliceScanner}} during datanode 
shutdown.  However, that patch did not consider the case of deleting a block 
pool via {{ClientDatanodeProtocol#deleteBlockPool}} while the datanode remains 
running.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4768) block scanner does not close verification log when a block pool is being deleted (but the datanode remains running)

2013-04-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4768:


Target Version/s: 3.0.0, 2.0.4-alpha  (was: 3.0.0)

> block scanner does not close verification log when a block pool is being 
> deleted (but the datanode remains running)
> ---
>
> Key: HDFS-4768
> URL: https://issues.apache.org/jira/browse/HDFS-4768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.0.4-alpha
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>
> HDFS-4274 fixed a file handle leak of the block scanner's verification logs 
> by adding method {{BlockPoolSliceScanner#shutdown}} and guaranteeing that the 
> method gets called for each live {{BlockPoolSliceScanner}} during datanode 
> shutdown.  However, that patch did not consider the case of deleting a block 
> pool via {{ClientDatanodeProtocol#deleteBlockPool}} while the datanode 
> remains running.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4768) block scanner does not close verification log when a block pool is being deleted (but the datanode remains running)

2013-04-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643288#comment-13643288
 ] 

Chris Nauroth commented on HDFS-4768:
-

This problem occasionally causes a failure in {{TestDeleteBlockPool}}.  The 
client request to delete the block pool races with the block scanner.  
Depending on the timing, the verification log file could remain in place, and 
then deleting the underlying storage fails.  I can reproduce the problem more 
easily on Windows.


> block scanner does not close verification log when a block pool is being 
> deleted (but the datanode remains running)
> ---
>
> Key: HDFS-4768
> URL: https://issues.apache.org/jira/browse/HDFS-4768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.0.4-alpha
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>
> HDFS-4274 fixed a file handle leak of the block scanner's verification logs 
> by adding method {{BlockPoolSliceScanner#shutdown}} and guaranteeing that the 
> method gets called for each live {{BlockPoolSliceScanner}} during datanode 
> shutdown.  However, that patch did not consider the case of deleting a block 
> pool via {{ClientDatanodeProtocol#deleteBlockPool}} while the datanode 
> remains running.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4768) block scanner does not close verification log when a block pool is being deleted (but the datanode remains running)

2013-04-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4768:


Attachment: HDFS-4768.1.patch

This patch fixes the problem by shutting down the {{BlockPoolSliceScanner}} any 
time a block pool gets removed from the scanner via 
{{DataBlockScanner#removeBlockPool}}.  I ran multiple tests with this patch on 
Mac and Windows.  I consistently see a pass from {{TestDeleteBlockPool}}, which 
is what originally made me notice the problem.
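
To make the shape of the fix concrete, here is a minimal self-contained sketch 
of the removal path.  The names below are illustrative stand-ins, not the 
actual Hadoop classes; the authoritative change is in the attached patch.

{code}
// Illustrative sketch only -- stand-in names, not the real datanode code.
import java.util.HashMap;
import java.util.Map;

class BlockScannerRemovalSketch {
  // Stands in for BlockPoolSliceScanner; shutdown() closes the verification log.
  interface PoolScanner { void shutdown(); }

  private final Map<String, PoolScanner> scanners =
      new HashMap<String, PoolScanner>();

  // Removing a block pool must also shut the scanner down, not merely drop
  // the map entry, or the verification log handle leaks while the datanode
  // keeps running.
  synchronized void removeBlockPool(String blockPoolId) {
    PoolScanner scanner = scanners.remove(blockPoolId);
    if (scanner != null) {
      scanner.shutdown();
    }
  }
}
{code}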

> block scanner does not close verification log when a block pool is being 
> deleted (but the datanode remains running)
> ---
>
> Key: HDFS-4768
> URL: https://issues.apache.org/jira/browse/HDFS-4768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.0.4-alpha
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4768.1.patch
>
>
> HDFS-4274 fixed a file handle leak of the block scanner's verification logs 
> by adding method {{BlockPoolSliceScanner#shutdown}} and guaranteeing that the 
> method gets called for each live {{BlockPoolSliceScanner}} during datanode 
> shutdown.  However, that patch did not consider the case of deleting a block 
> pool via {{ClientDatanodeProtocol#deleteBlockPool}} while the datanode 
> remains running.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4768) block scanner does not close verification log when a block pool is being deleted (but the datanode remains running)

2013-04-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4768:


Status: Patch Available  (was: Open)

> block scanner does not close verification log when a block pool is being 
> deleted (but the datanode remains running)
> ---
>
> Key: HDFS-4768
> URL: https://issues.apache.org/jira/browse/HDFS-4768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.0.4-alpha
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4768.1.patch
>
>
> HDFS-4274 fixed a file handle leak of the block scanner's verification logs 
> by adding method {{BlockPoolSliceScanner#shutdown}} and guaranteeing that the 
> method gets called for each live {{BlockPoolSliceScanner}} during datanode 
> shutdown.  However, that patch did not consider the case of deleting a block 
> pool via {{ClientDatanodeProtocol#deleteBlockPool}} while the datanode 
> remains running.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4752) TestRBWBlockInvalidation fails on Windows due to file locking

2013-04-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-4752:


Status: Open  (was: Patch Available)

I'm canceling this patch for now, but leaving the issue open, and leaving the 
patch here to point out where the file locking occurs.  On further 
investigation, we may need to allow share-delete access on the block and meta 
files anyway (not just in a testing mode).  I need to investigate potential 
interactions with concurrent deleteBlockPool operations further, i.e. 
HDFS-4768.

> TestRBWBlockInvalidation fails on Windows due to file locking
> -
>
> Key: HDFS-4752
> URL: https://issues.apache.org/jira/browse/HDFS-4752
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, test
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4752.1.patch
>
>
> The test attempts to invalidate a block by deleting its block file and meta 
> file.  This happens while a datanode thread holds the files open for write.  
> On Windows, this causes a locking conflict, and the test fails.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4767) directory is not snapshottable after clrQuota

2013-04-26 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4767:


Attachment: HDFS-4767.001.patch

Thanks for the report, Ramya! Uploaded a patch to fix the issue.
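
To sketch the suspected failure mode for readers following along (a toy model 
with illustrative names only, not the real INode hierarchy; the actual fix is 
in the attached patch): setting and clearing a quota swaps the directory inode 
between subtypes, and a swap that does not preserve the snapshottable subtype 
silently drops the flag.

{code}
// Toy model of the suspected mechanics -- illustrative names only.
class InodeSwapSketch {
  static class Dir {}
  static class QuotaDir extends Dir {}          // stand-in for a quota-carrying dir
  static class SnapshottableDir extends Dir {}  // stand-in for a snapshottable dir

  // Clearing a quota must not replace a snapshottable node with a plain one.
  static Dir clrQuota(Dir d) {
    if (d instanceof SnapshottableDir) {
      return d;        // keep the snapshottable subtype; only reset quota state
    }
    return new Dir();  // a plain quota dir may safely revert to a plain dir
  }
}
{code}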

> directory is not snapshottable after clrQuota
> -
>
> Key: HDFS-4767
> URL: https://issues.apache.org/jira/browse/HDFS-4767
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Ramya Sunil
>Assignee: Jing Zhao
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4767.001.patch
>
>
> 1. hadoop dfs -mkdir /user/foo/hdfs-snapshots
> 2. hadoop dfsadmin -setQuota 1 /user/foo/hdfs-snapshots
> 3. hadoop dfsadmin -allowSnapshot /user/foo/hdfs-snapshots
> Allowing snapshot on /user/foo/hdfs-snapshots succeeded
> 4. hadoop dfs -createSnapshot /user/foo/hdfs-snapshots s1
> createSnapshot: org.apache.hadoop.hdfs.protocol.NSQuotaExceededException: The 
> NameSpace quota (directories and files) is exceeded: quota=1 file count=2
> 5. hadoop dfsadmin -clrQuota /user/foo/hdfs-snapshots
> 6. hadoop dfs -createSnapshot /user/foo/hdfs-snapshots s1
> createSnapshot: 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotException: Directory 
> is not a snapshottable directory: /user/foo/hdfs-snapshots
> Step 6 should have succeeded since the directory was already snapshottable 
> (in step 3).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4659) Support setting execution bit for regular files

2013-04-26 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-4659:
-

Attachment: HDFS-4659.5.patch

Rebased the patch and fixed TestDFSPermission.
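
As a quick usage sketch, assuming the patch lets clients set and keep execute 
bits through the standard {{FileSystem#setPermission}} API (the path below is 
a made-up example):

{code}
// Example client usage -- hypothetical path; standard FileSystem API.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class SetExecBitExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // rwxr-xr-x: with the patch, the execute bits on a regular file are
    // honored instead of being unsupported.
    fs.setPermission(new Path("/user/foo/tool.sh"),
        new FsPermission((short) 0755));
  }
}
{code}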

> Support setting execution bit for regular files
> ---
>
> Key: HDFS-4659
> URL: https://issues.apache.org/jira/browse/HDFS-4659
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-4659.5.patch, HDFS-4659.patch, HDFS-4659.patch, 
> HDFS-4659.patch, HDFS-4659.patch
>
>
> By default regular files are created with mode "rw-r--r--", which is similar 
> to the default on many UNIX platforms. However, setting the execution bit for 
> regular files is not supported by HDFS. 
> It's the client's choice to set the file access mode. HDFS would be easier to 
> use if it could support this, especially when HDFS is accessed through 
> network file system protocols. This JIRA is to track the change to support 
> the execution bit. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4740) Fixes for a few test failures on Windows

2013-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643369#comment-13643369
 ] 

Hadoop QA commented on HDFS-4740:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12580757/HDFS-4740.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4327//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4327//console

This message is automatically generated.

> Fixes for a few test failures on Windows
> 
>
> Key: HDFS-4740
> URL: https://issues.apache.org/jira/browse/HDFS-4740
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0
>
> Attachments: HDFS-4740.002.patch, HDFS-4740.003.patch, HDFS-4740.patch
>
>
> This issue is to track the following Windows test failures:
> # TestWebHdfsUrl#testSecureAuthParamsInUrl fails with timeout. The 4 second 
> timeout is too low.
> # TestDFSUtil#testGetNNUris depends on the 127.0.0.1->localhost reverse 
> lookup which does not happen on Windows.
> # TestLargeBlock#testLargeBlockSize fails with timeout. This test takes a 
> rather long time to complete on Windows. Part of the problem may be that we 
> are using small VMs for Windows testing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4761) Refresh INodeMap in FSDirectory#reset()

2013-04-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4761:
-

 Component/s: namenode
Hadoop Flags: Reviewed

+1 patch looks good.

> Refresh INodeMap in FSDirectory#reset()
> ---
>
> Key: HDFS-4761
> URL: https://issues.apache.org/jira/browse/HDFS-4761
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Attachments: HDFS-4761.001.patch
>
>
> When resetting FSDirectory, the inodeMap should also be reset. I.e., we 
> should clear the inodeMap and then put in the new root node.
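
A self-contained toy of the ordering described above (all names illustrative; 
the real change is in HDFS-4761.001.patch): reset must both rebuild the root 
and re-seed the inode map, or id-based lookups keep resolving into the 
discarded tree.

{code}
// Toy sketch -- illustrative names, not the actual FSDirectory code.
import java.util.HashMap;
import java.util.Map;

class FsDirResetSketch {
  static class Inode {
    final long id;
    Inode(long id) { this.id = id; }
  }

  private final Map<Long, Inode> inodeMap = new HashMap<Long, Inode>();
  private Inode root = new Inode(1);

  synchronized void reset() {
    root = new Inode(1);          // fresh root for the new tree
    inodeMap.clear();             // drop every entry from the old tree
    inodeMap.put(root.id, root);  // then put in the new root node
  }
}
{code}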

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4761) Refresh INodeMap in FSDirectory#reset()

2013-04-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4761:
-

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Jing!

> Refresh INodeMap in FSDirectory#reset()
> ---
>
> Key: HDFS-4761
> URL: https://issues.apache.org/jira/browse/HDFS-4761
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-4761.001.patch
>
>
> When resetting FSDirectory, the inodeMap should also be reset. I.e., we 
> should clear the inodeMap and then put in the new root node.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4761) Refresh INodeMap in FSDirectory#reset()

2013-04-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643391#comment-13643391
 ] 

Hudson commented on HDFS-4761:
--

Integrated in Hadoop-trunk-Commit #3674 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3674/])
HDFS-4761. When resetting FSDirectory, the inodeMap should also be reset.  
Contributed by Jing Zhao (Revision 1476452)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476452
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java


> Refresh INodeMap in FSDirectory#reset()
> ---
>
> Key: HDFS-4761
> URL: https://issues.apache.org/jira/browse/HDFS-4761
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HDFS-4761.001.patch
>
>
> When resetting FSDirectory, the inodeMap should also be reset. I.e., we 
> should clear the inodeMap and then put in the new root node.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4769) TestPersistBlocks#testRestartDfs fails on Windows

2013-04-26 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-4769:
---

 Summary: TestPersistBlocks#testRestartDfs fails on Windows
 Key: HDFS-4769
 URL: https://issues.apache.org/jira/browse/HDFS-4769
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
 Fix For: 3.0.0


Exception details attached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4768) block scanner does not close verification log when a block pool is being deleted (but the datanode remains running)

2013-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643395#comment-13643395
 ] 

Hadoop QA commented on HDFS-4768:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12580763/HDFS-4768.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4328//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4328//console

This message is automatically generated.

> block scanner does not close verification log when a block pool is being 
> deleted (but the datanode remains running)
> ---
>
> Key: HDFS-4768
> URL: https://issues.apache.org/jira/browse/HDFS-4768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.0.4-alpha
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4768.1.patch
>
>
> HDFS-4274 fixed a file handle leak of the block scanner's verification logs 
> by adding method {{BlockPoolSliceScanner#shutdown}} and guaranteeing that the 
> method gets called for each live {{BlockPoolSliceScanner}} during datanode 
> shutdown.  However, that patch did not consider the case of deleting a block 
> pool via {{ClientDatanodeProtocol#deleteBlockPool}} while the datanode 
> remains running.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4767) directory is not snapshottable after clrQuota

2013-04-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4767:
-

Hadoop Flags: Reviewed

+1 patch looks good.

> directory is not snapshottable after clrQuota
> -
>
> Key: HDFS-4767
> URL: https://issues.apache.org/jira/browse/HDFS-4767
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Ramya Sunil
>Assignee: Jing Zhao
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4767.001.patch
>
>
> 1. hadoop dfs -mkdir /user/foo/hdfs-snapshots
> 2. hadoop dfsadmin -setQuota 1 /user/foo/hdfs-snapshots
> 3. hadoop dfsadmin -allowSnapshot /user/foo/hdfs-snapshots
> Allowing snapshot on /user/foo/hdfs-snapshots succeeded
> 4. hadoop dfs -createSnapshot /user/foo/hdfs-snapshots s1
> createSnapshot: org.apache.hadoop.hdfs.protocol.NSQuotaExceededException: The 
> NameSpace quota (directories and files) is exceeded: quota=1 file count=2
> 5. hadoop dfsadmin -clrQuota /user/foo/hdfs-snapshots
> 6. hadoop dfs -createSnapshot /user/foo/hdfs-snapshots s1
> createSnapshot: 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotException: Directory 
> is not a snapshottable directory: /user/foo/hdfs-snapshots
> Step 6 should have succeeded since the directory was already snapshottable 
> (in step 3).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4769) TestPersistBlocks#testRestartDfs fails on Windows

2013-04-26 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4769:


Attachment: TestPersistBlocks-testRestartDfs-failure1.txt

> TestPersistBlocks#testRestartDfs fails on Windows
> -
>
> Key: HDFS-4769
> URL: https://issues.apache.org/jira/browse/HDFS-4769
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
> Fix For: 3.0.0
>
> Attachments: TestPersistBlocks-testRestartDfs-failure1.txt
>
>
> Exception details attached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4768) block scanner does not close verification log when a block pool is being deleted (but the datanode remains running)

2013-04-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643398#comment-13643398
 ] 

Chris Nauroth commented on HDFS-4768:
-

{quote}
-1 tests included. The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.
{quote}

There are no new tests, but this patch is required to get consistent successful 
runs for {{TestDeleteBlockPool}}.  I ran the test repeatedly with this patch on 
multiple platforms, and it passed every time.


> block scanner does not close verification log when a block pool is being 
> deleted (but the datanode remains running)
> ---
>
> Key: HDFS-4768
> URL: https://issues.apache.org/jira/browse/HDFS-4768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.0.4-alpha
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4768.1.patch
>
>
> HDFS-4274 fixed a file handle leak of the block scanner's verification logs 
> by adding method {{BlockPoolSliceScanner#shutdown}} and guaranteeing that the 
> method gets called for each live {{BlockPoolSliceScanner}} during datanode 
> shutdown.  However, that patch did not consider the case of deleting a block 
> pool via {{ClientDatanodeProtocol#deleteBlockPool}} while the datanode 
> remains running.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4760) Update inodeMap after node replacement

2013-04-26 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4760:


Attachment: HDFS-4760.002.patch

Rebased the patch.

> Update inodeMap after node replacement
> --
>
> Key: HDFS-4760
> URL: https://issues.apache.org/jira/browse/HDFS-4760
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-4760.001.patch, HDFS-4760.002.patch
>
>
> Similar to HDFS-4757, we need to update the inodeMap after node 
> replacement. Because a lot of node replacement happens in the snapshot branch 
> (e.g., INodeDirectory => INodeDirectoryWithSnapshot, INodeDirectory <=> 
> INodeDirectorySnapshottable, INodeFile => INodeFileWithSnapshot ...), this 
> becomes a non-trivial issue.
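
For intuition, a toy sketch of keeping the map in sync across a replacement 
(illustrative names; the real work is in the attached patches):

{code}
// Toy sketch -- replacing a node in the tree must also refresh the inodeMap
// entry, since the map otherwise still points at the old object.
import java.util.HashMap;
import java.util.Map;

class NodeReplacementSketch {
  static class Inode {
    final long id;
    Inode(long id) { this.id = id; }
  }

  private final Map<Long, Inode> inodeMap = new HashMap<Long, Inode>();

  void replaceNode(Inode oldNode, Inode newNode) {
    // remove-then-put keeps the id -> inode mapping consistent with the tree
    inodeMap.remove(oldNode.id);
    inodeMap.put(newNode.id, newNode);
  }
}
{code}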

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4769) TestPersistBlocks#testRestartDfs fails on Windows

2013-04-26 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-4769:


Assignee: Arpit Agarwal

> TestPersistBlocks#testRestartDfs fails on Windows
> -
>
> Key: HDFS-4769
> URL: https://issues.apache.org/jira/browse/HDFS-4769
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0
>
> Attachments: TestPersistBlocks-testRestartDfs-failure1.txt
>
>
> Exception details attached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HDFS-4767) directory is not snapshottable after clrQuota

2013-04-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HDFS-4767.
--

Resolution: Fixed

I have committed this.  Thanks, Jing!

> directory is not snapshottable after clrQuota
> -
>
> Key: HDFS-4767
> URL: https://issues.apache.org/jira/browse/HDFS-4767
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Ramya Sunil
>Assignee: Jing Zhao
> Fix For: Snapshot (HDFS-2802)
>
> Attachments: HDFS-4767.001.patch
>
>
> 1. hadoop dfs -mkdir /user/foo/hdfs-snapshots
> 2. hadoop dfsadmin -setQuota 1 /user/foo/hdfs-snapshots
> 3. hadoop dfsadmin -allowSnapshot /user/foo/hdfs-snapshots
> Allowing snapshot on /user/foo/hdfs-snapshots succeeded
> 4. hadoop dfs -createSnapshot /user/foo/hdfs-snapshots s1
> createSnapshot: org.apache.hadoop.hdfs.protocol.NSQuotaExceededException: The 
> NameSpace quota (directories and files) is exceeded: quota=1 file count=2
> 5. hadoop dfsadmin -clrQuota /user/foo/hdfs-snapshots
> 6. hadoop dfs -createSnapshot /user/foo/hdfs-snapshots s1
> createSnapshot: 
> org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotException: Directory 
> is not a snapshottable directory: /user/foo/hdfs-snapshots
> Step 6 should have succeeded since the directory was already snapshottable 
> (in step 3).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4489) Use InodeID as an identifier of a file in HDFS protocols and APIs

2013-04-26 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643404#comment-13643404
 ] 

Konstantin Shvachko commented on HDFS-4489:
---

> hostile tone.

I apologize.
I guess what I really wanted to say is that it is hostile to commit 
changes in a stabilization branch before the release plan is proposed.

>  would be great if you can really look at the patch

You know I did.
Thanks for responding on the thread related to 2.0.5. I understand the plan 
much better.
I appreciate your reverting HDFS-4434.

There is still an incompatible change HDFS-4296. It is listed in new features 
for some reason.
Do you still need HDFS-4296 once HDFS-4434 is reverted?
We have not changed LayoutVersion since branch 0.23.

> Use InodeID as an identifier of a file in HDFS protocols and APIs
> 
>
> Key: HDFS-4489
> URL: https://issues.apache.org/jira/browse/HDFS-4489
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.0.5-beta
>
>
> The benefit of using InodeID to uniquely identify a file is multi-fold. 
> Here are a few examples:
> 1. uniquely identify a file across renames; related JIRAs include HDFS-4258, 
> HDFS-4437.
> 2. modification checks in tools like distcp. Since a file could have been 
> replaced or renamed, the file name and size combination is not reliable, 
> but the combination of file id and size is unique.
> 3. id based protocol support (e.g., NFS)
> 4. to make the pluggable block placement policy use fileid instead of 
> filename (HDFS-385).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-3934) duplicative dfs_hosts entries handled wrong

2013-04-26 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-3934:
---

Attachment: HDFS-3934.010.patch

bq. It'd be nice to continue to use the HostsFileReader and post-process the 
result. Otherwise it's a consistency/maintenance burden to copy-n-paste any new 
parsing functionality.

OK, I'll use the {{HostsFileReader}} parsing code.

bq. Why does the reader need to instantiate a dummy DatanodeID?

You're right.  Re-using {{DatanodeID}} for this purpose doesn't really make 
sense.  I created a new type called {{HostFileManager#Entry}} to represent host 
file entries.

bq. It appears to be for repeatedly making the somewhat fragile assumption that 
xferAddr is ipAddr+port? If that relationship changes, we've got a problem...

Fixed to use getIpAddr() + ":" + getXferPort() in all cases.

bq. Patch appears to have dropped support for the node's registration name. 
Eli Collins wanted me to maintain that feature in HDFS-3990. If we need to keep 
it, doing a lookup and a canonical lookup (can trigger another dns lookup) 
isn't compatible with supporting the reg name.

Thanks for pointing this out.  I talked to Eli and he explained the distinction 
between registration names and hostnames to me.  I added back support for 
"registration names" and added a unit test to ensure this works properly.

bq. Doing a lookup followed by getCanonicalName is a bad idea. It does 2 more 
lookups: hostname -> PTR -> A so it can resolve CNAMES to IP to hostname. With 
this change I think it will cause 3 lookups per host.

One key feature of this change is that all the lookups happen when the include 
and exclude files are read.  *No* lookups happen during 
{{DatanodeManager#getDatanodeListForReport}}, or any of the other cases where 
we check the host file entries.

On the advice of Eli, I removed the call to {{getCanonicalName}}.  We can just 
use the name the user specified in the hosts file; that should be fine.

bq. Question about "// If no transfer port was specified, we take a guess". Why 
needed, and what are the ramifications for getting this wrong? Just a display 
issue?

We just don't have the information.  If the datanode is dead, we only know what 
the entry says in the hosts file(s).  If the entries don't have the port, we 
have to guess.  I don't see any way around this.  It might be more elegant if 
the web UI could understand the concept of "port is unknown," but adding that 
seems out of scope.

In addition to the unit tests, I did some manual testing on this and verified 
that it got rid of the double-counting of nodes in the web UI for me.
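
To illustrate the approach, here is a sketch of an entry type for hosts-file 
lines.  The real type is {{HostFileManager#Entry}} in the patch; the field 
names and parsing below are assumptions for the sketch, not the patch's code.

{code}
// Illustrative sketch -- assumed fields and parsing, not the patch itself.
import java.net.InetAddress;
import java.net.UnknownHostException;

class HostEntrySketch {
  final String prefix;  // hostname or IP exactly as written in the hosts file
  final String ipAddr;  // resolved once, at file-read time
  final int port;       // 0 means the entry did not specify a transfer port

  HostEntrySketch(String line) throws UnknownHostException {
    int colon = line.indexOf(':');
    prefix = (colon < 0) ? line : line.substring(0, colon);
    port = (colon < 0) ? 0 : Integer.parseInt(line.substring(colon + 1));
    ipAddr = InetAddress.getByName(prefix).getHostAddress();
  }

  // Later membership checks are lookup-free string/port comparisons.
  boolean matches(String candidateIp, int candidatePort) {
    return ipAddr.equals(candidateIp) && (port == 0 || port == candidatePort);
  }
}
{code}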

> duplicative dfs_hosts entries handled wrong
> ---
>
> Key: HDFS-3934
> URL: https://issues.apache.org/jira/browse/HDFS-3934
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.1-alpha
>Reporter: Andy Isaacson
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-3934.001.patch, HDFS-3934.002.patch, 
> HDFS-3934.003.patch, HDFS-3934.004.patch, HDFS-3934.005.patch, 
> HDFS-3934.006.patch, HDFS-3934.007.patch, HDFS-3934.008.patch, 
> HDFS-3934.010.patch
>
>
> A dead DN listed in dfs_hosts_allow.txt by IP and in dfs_hosts_exclude.txt by 
> hostname ends up being displayed twice in {{dfsnodelist.jsp?whatNodes=DEAD}} 
> after the NN restarts because {{getDatanodeListForReport}} does not handle 
> such a "pseudo-duplicate" correctly:
> # the "Remove any nodes we know about from the map" loop no longer has the 
> knowledge to remove the spurious entries
> # the "The remaining nodes are ones that are referenced by the hosts files" 
> loop does not do hostname lookups, so does not know that the IP and hostname 
> refer to the same host.
> Relatedly, such an IP-based dfs_hosts entry results in a cosmetic problem in 
> the JSP output:  The *Node* column shows ":50010" as the nodename, with HTML 
> markup {{<a 
> href="http://:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=%2F&nnaddr=172.29.97.196:8020"
>  title="172.29.97.216:50010">:50010</a>}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4769) TestPersistBlocks#testRestartDfs fails on Windows

2013-04-26 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643410#comment-13643410
 ] 

Arpit Agarwal commented on HDFS-4769:
-

Likely exposing a namenode race condition. The server exception goes away with 
a 10 second sleep between these two lines.

{code}
stream.write(DATA_AFTER_RESTART);
stream.close();
{code}

I'll investigate. Not sure if related to HDFS-3811.

> TestPersistBlocks#testRestartDfs fails on Windows
> -
>
> Key: HDFS-4769
> URL: https://issues.apache.org/jira/browse/HDFS-4769
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 3.0.0
>
> Attachments: TestPersistBlocks-testRestartDfs-failure1.txt
>
>
> Exception details attached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4768) block scanner does not close verification log when a block pool is being deleted (but the datanode remains running)

2013-04-26 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643415#comment-13643415
 ] 

Arpit Agarwal commented on HDFS-4768:
-

+1

Verified on Windows and OS X. Nice find!

> block scanner does not close verification log when a block pool is being 
> deleted (but the datanode remains running)
> ---
>
> Key: HDFS-4768
> URL: https://issues.apache.org/jira/browse/HDFS-4768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0, 2.0.4-alpha
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-4768.1.patch
>
>
> HDFS-4274 fixed a file handle leak of the block scanner's verification logs 
> by adding method {{BlockPoolSliceScanner#shutdown}} and guaranteeing that the 
> method gets called for each live {{BlockPoolSliceScanner}} during datanode 
> shutdown.  However, that patch did not consider the case of deleting a block 
> pool via {{ClientDatanodeProtocol#deleteBlockPool}} while the datanode 
> remains running.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3934) duplicative dfs_hosts entries handled wrong

2013-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643416#comment-13643416
 ] 

Hadoop QA commented on HDFS-3934:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12580790/HDFS-3934.010.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4330//console

This message is automatically generated.

> duplicative dfs_hosts entries handled wrong
> ---
>
> Key: HDFS-3934
> URL: https://issues.apache.org/jira/browse/HDFS-3934
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.1-alpha
>Reporter: Andy Isaacson
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HDFS-3934.001.patch, HDFS-3934.002.patch, 
> HDFS-3934.003.patch, HDFS-3934.004.patch, HDFS-3934.005.patch, 
> HDFS-3934.006.patch, HDFS-3934.007.patch, HDFS-3934.008.patch, 
> HDFS-3934.010.patch
>
>
> A dead DN listed in dfs_hosts_allow.txt by IP and in dfs_hosts_exclude.txt by 
> hostname ends up being displayed twice in {{dfsnodelist.jsp?whatNodes=DEAD}} 
> after the NN restarts because {{getDatanodeListForReport}} does not handle 
> such a "pseudo-duplicate" correctly:
> # the "Remove any nodes we know about from the map" loop no longer has the 
> knowledge to remove the spurious entries
> # the "The remaining nodes are ones that are referenced by the hosts files" 
> loop does not do hostname lookups, so does not know that the IP and hostname 
> refer to the same host.
> Relatedly, such an IP-based dfs_hosts entry results in a cosmetic problem in 
> the JSP output:  The *Node* column shows ":50010" as the nodename, with HTML 
> markup {{<a 
> href="http://:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=%2F&nnaddr=172.29.97.196:8020"
>  title="172.29.97.216:50010">:50010</a>}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4489) Use InodeID as an identifier of a file in HDFS protocols and APIs

2013-04-26 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643424#comment-13643424
 ] 

Suresh Srinivas commented on HDFS-4489:
---

bq. There is still an incompatible change HDFS-4296. It is listed in new 
features for some reason.
It is not incompatible and hence not marked as incompatible in jira or 
CHANGES.txt. It is currently listed as a New Feature in CHANGES.txt. I do not 
think it should be listed under the New Features section (though it does not 
qualify as an Improvement or Bug fix either). I will move it to the bug fix 
section.

bq. Do you still need HDFS-4296 once HDFS-434 is reverted?
It is needed because it corresponds to a layout version reserved in branch-1 
for concat. It is not related to HDFS-4434.


> Use InodeID as an identifier of a file in HDFS protocols and APIs
> 
>
> Key: HDFS-4489
> URL: https://issues.apache.org/jira/browse/HDFS-4489
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 2.0.5-beta
>
>
> The benefit of using InodeID to uniquely identify a file is multi-fold. 
> Here are a few examples:
> 1. uniquely identify a file across renames; related JIRAs include HDFS-4258, 
> HDFS-4437.
> 2. modification checks in tools like distcp. Since a file could have been 
> replaced or renamed, the file name and size combination is not reliable, 
> but the combination of file id and size is unique.
> 3. id based protocol support (e.g., NFS)
> 4. to make the pluggable block placement policy use fileid instead of 
> filename (HDFS-385).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4758) Disallow nested snapshottable directories

2013-04-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643429#comment-13643429
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4758:
--

> A common use case for snapshots is a short-lived snapshot which is then used 
> as the source for distcp. ...

This does not seem to be related to nested snapshots.


> ... user might have different backup policies for /user (once every day) and 
> /user/hive (every 8 hrs) ...

In such a case, the admin may take a snapshot of all the subdirs in /user once 
per day.


> If the current restrictions takes away some use cases, so be it. Lets turn it 
> back on later if we cannot live without it.

Agreed.  We can turn it back on if necessary.

> Disallow nested snapshottable directories
> -
>
> Key: HDFS-4758
> URL: https://issues.apache.org/jira/browse/HDFS-4758
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
>
> Nested snapshottable directories are supported by the current implementation. 
>  However, it seems that there are no good use cases for nested snapshottable 
> directories.  So we disable it for now until someone has a valid use case for 
> it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4659) Support setting execution bit for regular files

2013-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643435#comment-13643435
 ] 

Hadoop QA commented on HDFS-4659:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12580773/HDFS-4659.5.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4329//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4329//console

This message is automatically generated.

> Support setting execution bit for regular files
> ---
>
> Key: HDFS-4659
> URL: https://issues.apache.org/jira/browse/HDFS-4659
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
> Attachments: HDFS-4659.5.patch, HDFS-4659.patch, HDFS-4659.patch, 
> HDFS-4659.patch, HDFS-4659.patch
>
>
> By default regular files are created with mode "rw-r--r--", which is similar 
> to the default on many UNIX platforms. However, setting the execution bit for 
> regular files is not supported by HDFS. 
> It's the client's choice to set the file access mode. HDFS would be easier to 
> use if it could support this, especially when HDFS is accessed through 
> network file system protocols. This JIRA is to track the change to support 
> the execution bit. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4758) Disallow nested snapshottable directories

2013-04-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-4758:
-

Attachment: h4758_20140426.patch

h4758_20140426.patch: disallow nested snapshottable directories but allow it in 
tests.
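
A toy sketch of such a rule (illustrative names and structure; the enforced 
check is in the attached patch): becoming snapshottable is refused when an 
ancestor or descendant is already snapshottable, with an escape hatch so the 
existing snapshot tests can still nest.

{code}
// Toy sketch -- illustrative names, not the actual namenode code.
import java.util.ArrayList;
import java.util.List;

class NestedSnapshotRuleSketch {
  static class Dir {
    Dir parent;
    boolean snapshottable;
    final List<Dir> children = new ArrayList<Dir>();

    boolean hasSnapshottableDescendant() {
      for (Dir c : children) {
        if (c.snapshottable || c.hasSnapshottableDescendant()) {
          return true;
        }
      }
      return false;
    }
  }

  static void checkNotNested(Dir candidate, boolean testAllowNested) {
    if (testAllowNested) {
      return;  // allow nesting in tests, per the patch summary above
    }
    for (Dir a = candidate.parent; a != null; a = a.parent) {
      if (a.snapshottable) {
        throw new IllegalStateException("ancestor is snapshottable");
      }
    }
    if (candidate.hasSnapshottableDescendant()) {
      throw new IllegalStateException("descendant is snapshottable");
    }
  }
}
{code}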

> Disallow nested snapshottable directories
> -
>
> Key: HDFS-4758
> URL: https://issues.apache.org/jira/browse/HDFS-4758
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: h4758_20140426.patch
>
>
> Nested snapshottable directories are supported by the current implementation. 
>  However, it seems that there are no good use cases for nested snapshottable 
> directories.  So we disable it for now until someone has a valid use case for 
> it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4760) Update inodeMap after node replacement

2013-04-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643447#comment-13643447
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4760:
--

After some thinking, the relationship between FSDirectory and INodeMap should 
be has-a but not is-a.  Do you agree?

> Update inodeMap after node replacement
> --
>
> Key: HDFS-4760
> URL: https://issues.apache.org/jira/browse/HDFS-4760
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-4760.001.patch, HDFS-4760.002.patch
>
>
> Similar with HDFS-4757, we need to update the inodeMap after node 
> replacement. Because a lot of node replacement happens in the snapshot branch 
> (e.g., INodeDirectory => INodeDirectoryWithSnapshot, INodeDirectory <=> 
> INodeDirectorySnapshottable, INodeFile => INodeFileWithSnapshot ...), this 
> becomes a non-trivial issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-2802) Support for RW/RO snapshots in HDFS

2013-04-26 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HDFS-2802:
-

Attachment: h2802_20130426.patch

h2802_20130426.patch

> Support for RW/RO snapshots in HDFS
> ---
>
> Key: HDFS-2802
> URL: https://issues.apache.org/jira/browse/HDFS-2802
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Reporter: Hari Mankude
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: 2802.diff, 2802.patch, 2802.patch, h2802_20130417.patch, 
> h2802_20130422.patch, h2802_20130423.patch, h2802_20130425.patch, 
> h2802_20130426.patch, HDFS-2802.20121101.patch, 
> HDFS-2802-meeting-minutes-121101.txt, HDFSSnapshotsDesign.pdf, snap.patch, 
> snapshot-design.pdf, snapshot-design.tex, snapshot-one-pager.pdf, 
> Snapshots20121018.pdf, Snapshots20121030.pdf, Snapshots.pdf, 
> snapshot-testplan.pdf
>
>
> Snapshots are point-in-time images of parts of the filesystem or the entire 
> filesystem. Snapshots can be a read-only or a read-write point-in-time copy 
> of the filesystem. There are several use cases for snapshots in HDFS. I will 
> post a detailed write-up soon with more information.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-2802) Support for RW/RO snapshots in HDFS

2013-04-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643511#comment-13643511
 ] 

Hadoop QA commented on HDFS-2802:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12580803/h2802_20130426.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 29 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4331//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4331//console

This message is automatically generated.

> Support for RW/RO snapshots in HDFS
> ---
>
> Key: HDFS-2802
> URL: https://issues.apache.org/jira/browse/HDFS-2802
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Reporter: Hari Mankude
>Assignee: Tsz Wo (Nicholas), SZE
> Attachments: 2802.diff, 2802.patch, 2802.patch, h2802_20130417.patch, 
> h2802_20130422.patch, h2802_20130423.patch, h2802_20130425.patch, 
> h2802_20130426.patch, HDFS-2802.20121101.patch, 
> HDFS-2802-meeting-minutes-121101.txt, HDFSSnapshotsDesign.pdf, snap.patch, 
> snapshot-design.pdf, snapshot-design.tex, snapshot-one-pager.pdf, 
> Snapshots20121018.pdf, Snapshots20121030.pdf, Snapshots.pdf, 
> snapshot-testplan.pdf
>
>
> Snapshots are point-in-time images of parts of the filesystem or the entire 
> filesystem. Snapshots can be a read-only or a read-write point-in-time copy 
> of the filesystem. There are several use cases for snapshots in HDFS. I will 
> post a detailed write-up soon with more information.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4610) Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute

2013-04-26 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643519#comment-13643519
 ] 

Ivan Mitic commented on HDFS-4610:
--

Thanks Chris and Arpit for the review and comments.

bq. I do see that this patch is making changes in TestCheckpoint and 
TestNNStorageRetentionFunctional though. Ivan, can you clarify if this patch 
makes these 2 tests pass for you?
Thanks, let me take a look. I did not explicitly try to debug every unit test 
I changed that was already failing. My main goal was to add the missing 
functionality for Windows and to set us up for better cross-platform support. 

> Move to using common utils FileUtil#setReadable/Writable/Executable and 
> FileUtil#canRead/Write/Execute
> --
>
> Key: HDFS-4610
> URL: https://issues.apache.org/jira/browse/HDFS-4610
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Attachments: HDFS-4610.commonfileutils.2.patch, 
> HDFS-4610.commonfileutils.patch
>
>
> Switch to using common utils described in HADOOP-9413 that work well 
> cross-platform.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira