[jira] [Updated] (HDFS-6315) Decouple recording edit logs from FSDirectory

2014-05-02 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6315:
-

Attachment: HDFS-6315.001.patch

 Decouple recording edit logs from FSDirectory
 -

 Key: HDFS-6315
 URL: https://issues.apache.org/jira/browse/HDFS-6315
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-6315.000.patch, HDFS-6315.001.patch


 Currently both FSNamesystem and FSDirectory record edit logs. This design 
 requires both FSNamesystem and FSDirectory to be tightly coupled together to 
 implement a durable namespace.
 This jira proposes to separate the responsibility of implementing the 
 namespace and providing durability with edit logs. Specifically, FSDirectory 
 implements the namespace (which should have no edit log operations), and 
 FSNamesystem implements durability by recording the edit logs.
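 For illustration, here is a minimal, self-contained sketch of the proposed split 
 (a toy model with hypothetical class and method names, not code from the patch): 
 the namespace layer performs only in-memory mutations, and the caller records 
 the edit for durability.

{code}
// Toy model of the proposed split: the namespace layer (standing in for
// FSDirectory) only mutates in-memory state; the system layer (standing in
// for FSNamesystem) records the edit log entry for durability.
import java.util.ArrayList;
import java.util.List;

interface EditLog {
  void logMkdir(String path);              // durability concern lives here
}

class Namespace {                           // stands in for FSDirectory
  private final List<String> dirs = new ArrayList<>();
  void mkdir(String path) {                 // pure in-memory mutation, no logging
    dirs.add(path);
  }
}

class Namesystem {                          // stands in for FSNamesystem
  private final Namespace ns = new Namespace();
  private final EditLog log;
  Namesystem(EditLog log) { this.log = log; }

  void mkdir(String path) {
    ns.mkdir(path);                         // 1) update the namespace
    log.logMkdir(path);                     // 2) record the edit for durability
  }
}

public class DecouplingSketch {
  public static void main(String[] args) {
    Namesystem fsn = new Namesystem(path -> System.out.println("OP_MKDIR " + path));
    fsn.mkdir("/user/foo");
  }
}
{code}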



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6258) Namenode server-side storage for XAttrs

2014-05-02 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987464#comment-13987464
 ] 

Yi Liu commented on HDFS-6258:
--

Thanks Uma for your review.

{quote}
Javadoc for tests saying Restart NN and checkpoint scenarios. But actually we 
don't have tests related to that here. Of course, we cannot cover them now as 
we don't persist them yet.
{quote}

Right, since the patch doesn't include persistence, it will not cover 
restarting the NN or saving a checkpoint. Let me remove the description about 
restarting the NN and saving a checkpoint.

{quote}
Instead of asserting true == true, we can add a fail(msg) call after setXAttr and 
an empty catch block. If setXAttr does not throw an exception, the fail method will 
make your test fail and provide the message. So we need not have the flag?
...
Same as above; there are many cases like that, please check.
{quote}

Good point, we can use Assert.fail after the {{setXAttr}} statement.
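For example, the pattern being suggested looks roughly like the following 
(a self-contained JUnit 4 sketch; the parseInt call merely stands in for the 
setXAttr call that is expected to throw):

{code}
import static org.junit.Assert.fail;
import org.junit.Test;

public class FailPatternExample {
  @Test
  public void testExpectedException() {
    try {
      Integer.parseInt("not a number");   // stands in for the setXAttr call
      fail("the call above should have thrown an exception");
    } catch (NumberFormatException e) {
      // expected: reaching here means the test passes, so no flag variable is needed
    }
  }
}
{code}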

{quote}
NNConf.java:
...
Please provide a separate doc for each instead of a combined one.
...
The second method is missing javadoc.
{quote}
Right, let's update the javadoc.

{quote}

{code}
throw new AccessControlException("User doesn't have permission for xattr: "
    + xAttr.getName());
{code}
I think the message could also tell about the passed namespace?
{quote}

OK, let's add the namespace info in the exception message.
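Roughly, the change being discussed would look like the snippet below (the 
accessor names are assumptions about the XAttr class, not taken from the patch):

{code}
throw new AccessControlException("User doesn't have permission for xattr: "
    + xAttr.getNameSpace() + "." + xAttr.getName());
{code}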

{quote}
Do we have a test covering this max limit?
{quote}
Good point, let's update the test case to add a max limit check.

{quote}
Also, it may be good to add the tests for checkNameNodeSafeMode here? Sorry for 
missing this suggestion in my previous feedback. Realized it now.
{quote}
The checkNameNodeSafeMode test will be covered in another JIRA, since it involves 
restarting the NN/saving a checkpoint.

{quote}
With the current code, in which case xAttrs can be null?
{quote}
OK. Let's add an assertion here.

{quote}
Since you don't use the newXattrs var right now, you could have just called the 
API directly to avoid the Java warning.
{quote}
OK. Let's remove the warning.

{quote}
{code}
@Rule
  public ExpectedException exception = ExpectedException.none();
{code}
What is the use of this Rule declaration in current tests?
{quote}
Right, it's not used now. Let's remove it.

 Namenode server-side storage for XAttrs
 ---

 Key: HDFS-6258
 URL: https://issues.apache.org/jira/browse/HDFS-6258
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-6258.1.patch, HDFS-6258.2.patch, HDFS-6258.3.patch, 
 HDFS-6258.4.patch, HDFS-6258.5.patch, HDFS-6258.patch


 Namenode Server-side storage for XAttrs: FSNamesystem and friends.
 Refine XAttrConfigFlag and AclConfigFlag to ConfigFlag.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5168) BlockPlacementPolicy does not work for cross node group dependencies

2014-05-02 Thread Nikola Vujic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikola Vujic updated HDFS-5168:
---

Attachment: HDFS-5168.patch

 BlockPlacementPolicy does not work for cross node group dependencies
 

 Key: HDFS-5168
 URL: https://issues.apache.org/jira/browse/HDFS-5168
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Nikola Vujic
Assignee: Nikola Vujic
Priority: Critical
 Attachments: HDFS-5168.patch, HDFS-5168.patch, HDFS-5168.patch, 
 HDFS-5168.patch, HDFS-5168.patch, HDFS-5168.patch, HDFS-5168.patch, 
 HDFS-5168.patch, HDFS-5168.patch


 Block placement policies do not work for cross rack/node group dependencies. 
 In reality this is needed when compute servers and storage fall into two 
 independent fault domains; in that case neither BlockPlacementPolicyDefault nor 
 BlockPlacementPolicyWithNodeGroup is able to provide proper block placement.
 Let's suppose that we have a Hadoop cluster with one rack with two servers, and 
 we run 2 VMs per server. The node group topology for this cluster would be:
  server1-vm1 - /d1/r1/n1
  server1-vm2 - /d1/r1/n1
  server2-vm1 - /d1/r1/n2
  server2-vm2 - /d1/r1/n2
 This works fine as long as the server and its storage fall into the same fault 
 domain, but if the storage is in a different fault domain from the server, we will 
 not be able to handle that. For example, if the storage of server1-vm1 is in the 
 same fault domain as the storage of server2-vm1, then we must not place two 
 replicas on these two nodes although they are in different node groups.
 Two possible approaches:
 - One approach would be to define cross rack/node group dependencies and to 
 use them when excluding nodes from the search space (see the sketch below). This 
 looks like the cleanest way to fix it, as it requires only minor changes in the 
 BlockPlacementPolicy classes.
 - The other approach would be to allow nodes to fall in more than one node group. 
 When we choose a node to hold a replica, we have to exclude from the search 
 space all nodes from the node groups to which the chosen node belongs. This 
 approach may require major changes in NetworkTopology.
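 As a rough illustration of the first approach, the following toy model shows how 
 cross-node-group dependencies could shrink the search space (all names and data 
 structures here are hypothetical, not from any patch):

{code}
// Toy sketch of cross-node-group dependencies: once a node is chosen, every
// node in its own group and in any dependent group is excluded from the
// remaining search space.
import java.util.*;

public class DependencySketch {
  static final Map<String, String> nodeToGroup = new HashMap<>();
  static final Map<String, Set<String>> groupToNodes = new HashMap<>();
  static final Map<String, Set<String>> dependentGroups = new HashMap<>();

  static void addNode(String node, String group) {
    nodeToGroup.put(node, group);
    groupToNodes.computeIfAbsent(group, g -> new HashSet<>()).add(node);
  }

  static void addDependency(String g1, String g2) {     // shared fault domain
    dependentGroups.computeIfAbsent(g1, g -> new HashSet<>()).add(g2);
    dependentGroups.computeIfAbsent(g2, g -> new HashSet<>()).add(g1);
  }

  // After choosing 'chosen', exclude its own group and all dependent groups.
  static Set<String> excludedAfterChoosing(String chosen) {
    String group = nodeToGroup.get(chosen);
    Set<String> groups = new HashSet<>(dependentGroups.getOrDefault(group, Collections.emptySet()));
    groups.add(group);
    Set<String> excluded = new HashSet<>();
    for (String g : groups) {
      excluded.addAll(groupToNodes.getOrDefault(g, Collections.emptySet()));
    }
    return excluded;
  }

  public static void main(String[] args) {
    addNode("server1-vm1", "/d1/r1/n1"); addNode("server1-vm2", "/d1/r1/n1");
    addNode("server2-vm1", "/d1/r1/n2"); addNode("server2-vm2", "/d1/r1/n2");
    addDependency("/d1/r1/n1", "/d1/r1/n2");   // their storage shares a fault domain
    // Prints all four nodes: none of them may hold the second replica.
    System.out.println(excludedAfterChoosing("server1-vm1"));
  }
}
{code}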



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5168) BlockPlacementPolicy does not work for cross node group dependencies

2014-05-02 Thread Nikola Vujic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987516#comment-13987516
 ] 

Nikola Vujic commented on HDFS-5168:


I fixed the javadoc warning and uploaded a new patch.


 BlockPlacementPolicy does not work for cross node group dependencies
 

 Key: HDFS-5168
 URL: https://issues.apache.org/jira/browse/HDFS-5168
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Nikola Vujic
Assignee: Nikola Vujic
Priority: Critical
 Attachments: HDFS-5168.patch, HDFS-5168.patch, HDFS-5168.patch, 
 HDFS-5168.patch, HDFS-5168.patch, HDFS-5168.patch, HDFS-5168.patch, 
 HDFS-5168.patch, HDFS-5168.patch


 Block placement policies do not work for cross rack/node group dependencies. 
 In reality this is needed when compute servers and storage fall into two 
 independent fault domains; in that case neither BlockPlacementPolicyDefault nor 
 BlockPlacementPolicyWithNodeGroup is able to provide proper block placement.
 Let's suppose that we have a Hadoop cluster with one rack with two servers, and 
 we run 2 VMs per server. The node group topology for this cluster would be:
  server1-vm1 - /d1/r1/n1
  server1-vm2 - /d1/r1/n1
  server2-vm1 - /d1/r1/n2
  server2-vm2 - /d1/r1/n2
 This works fine as long as the server and its storage fall into the same fault 
 domain, but if the storage is in a different fault domain from the server, we will 
 not be able to handle that. For example, if the storage of server1-vm1 is in the 
 same fault domain as the storage of server2-vm1, then we must not place two 
 replicas on these two nodes although they are in different node groups.
 Two possible approaches:
 - One approach would be to define cross rack/node group dependencies and to 
 use them when excluding nodes from the search space. This looks like the 
 cleanest way to fix it, as it requires only minor changes in the 
 BlockPlacementPolicy classes.
 - The other approach would be to allow nodes to fall in more than one node group. 
 When we choose a node to hold a replica, we have to exclude from the search 
 space all nodes from the node groups to which the chosen node belongs. This 
 approach may require major changes in NetworkTopology.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6258) Namenode server-side storage for XAttrs

2014-05-02 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6258:
-

Attachment: HDFS-6258.6.patch

The new patch addresses all of the comments. Thanks.

 Namenode server-side storage for XAttrs
 ---

 Key: HDFS-6258
 URL: https://issues.apache.org/jira/browse/HDFS-6258
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-6258.1.patch, HDFS-6258.2.patch, HDFS-6258.3.patch, 
 HDFS-6258.4.patch, HDFS-6258.5.patch, HDFS-6258.6.patch, HDFS-6258.patch


 Namenode Server-side storage for XAttrs: FSNamesystem and friends.
 Refine XAttrConfigFlag and AclConfigFlag to ConfigFlag.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6317) Add snapshot quota

2014-05-02 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987528#comment-13987528
 ] 

Tsz Wo Nicholas Sze commented on HDFS-6317:
---

What is the use case for having a snapshot quota?

 Add snapshot quota
 --

 Key: HDFS-6317
 URL: https://issues.apache.org/jira/browse/HDFS-6317
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Alex Shafer

 Either allow the 65k snapshot limit to be set with a configuration option  or 
 add a per-directory snapshot quota settable with the `hdfs dfsadmin` CLI and 
 viewable by appending fields to `hdfs dfs -count -q` output.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5168) BlockPlacementPolicy does not work for cross node group dependencies

2014-05-02 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-5168:
--


+1 patch looks good.

 BlockPlacementPolicy does not work for cross node group dependencies
 

 Key: HDFS-5168
 URL: https://issues.apache.org/jira/browse/HDFS-5168
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Nikola Vujic
Assignee: Nikola Vujic
Priority: Critical
 Attachments: HDFS-5168.patch, HDFS-5168.patch, HDFS-5168.patch, 
 HDFS-5168.patch, HDFS-5168.patch, HDFS-5168.patch, HDFS-5168.patch, 
 HDFS-5168.patch, HDFS-5168.patch


 Block placement policies do not work for cross rack/node group dependencies. 
 In reality this is needed when compute servers and storage fall into two 
 independent fault domains; in that case neither BlockPlacementPolicyDefault nor 
 BlockPlacementPolicyWithNodeGroup is able to provide proper block placement.
 Let's suppose that we have a Hadoop cluster with one rack with two servers, and 
 we run 2 VMs per server. The node group topology for this cluster would be:
  server1-vm1 - /d1/r1/n1
  server1-vm2 - /d1/r1/n1
  server2-vm1 - /d1/r1/n2
  server2-vm2 - /d1/r1/n2
 This works fine as long as the server and its storage fall into the same fault 
 domain, but if the storage is in a different fault domain from the server, we will 
 not be able to handle that. For example, if the storage of server1-vm1 is in the 
 same fault domain as the storage of server2-vm1, then we must not place two 
 replicas on these two nodes although they are in different node groups.
 Two possible approaches:
 - One approach would be to define cross rack/node group dependencies and to 
 use them when excluding nodes from the search space. This looks like the 
 cleanest way to fix it, as it requires only minor changes in the 
 BlockPlacementPolicy classes.
 - The other approach would be to allow nodes to fall in more than one node group. 
 When we choose a node to hold a replica, we have to exclude from the search 
 space all nodes from the node groups to which the chosen node belongs. This 
 approach may require major changes in NetworkTopology.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5168) BlockPlacementPolicy does not work for cross node group dependencies

2014-05-02 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987570#comment-13987570
 ] 

Tsz Wo Nicholas Sze commented on HDFS-5168:
---

I just found HDFS-6250, which is going to fix TestBalancerWithNodeGroup.

 BlockPlacementPolicy does not work for cross node group dependencies
 

 Key: HDFS-5168
 URL: https://issues.apache.org/jira/browse/HDFS-5168
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Nikola Vujic
Assignee: Nikola Vujic
Priority: Critical
 Attachments: HDFS-5168.patch, HDFS-5168.patch, HDFS-5168.patch, 
 HDFS-5168.patch, HDFS-5168.patch, HDFS-5168.patch, HDFS-5168.patch, 
 HDFS-5168.patch, HDFS-5168.patch


 Block placement policies do not work for cross rack/node group dependencies. 
 In reality this is needed when compute servers and storage fall into two 
 independent fault domains; in that case neither BlockPlacementPolicyDefault nor 
 BlockPlacementPolicyWithNodeGroup is able to provide proper block placement.
 Let's suppose that we have a Hadoop cluster with one rack with two servers, and 
 we run 2 VMs per server. The node group topology for this cluster would be:
  server1-vm1 - /d1/r1/n1
  server1-vm2 - /d1/r1/n1
  server2-vm1 - /d1/r1/n2
  server2-vm2 - /d1/r1/n2
 This works fine as long as the server and its storage fall into the same fault 
 domain, but if the storage is in a different fault domain from the server, we will 
 not be able to handle that. For example, if the storage of server1-vm1 is in the 
 same fault domain as the storage of server2-vm1, then we must not place two 
 replicas on these two nodes although they are in different node groups.
 Two possible approaches:
 - One approach would be to define cross rack/node group dependencies and to 
 use them when excluding nodes from the search space. This looks like the 
 cleanest way to fix it, as it requires only minor changes in the 
 BlockPlacementPolicy classes.
 - The other approach would be to allow nodes to fall in more than one node group. 
 When we choose a node to hold a replica, we have to exclude from the search 
 space all nodes from the node groups to which the chosen node belongs. This 
 approach may require major changes in NetworkTopology.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6258) Namenode server-side storage for XAttrs

2014-05-02 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987588#comment-13987588
 ] 

Uma Maheswara Rao G commented on HDFS-6258:
---

Thanks a lot for handling all the feedback, Yi! Also, thanks a lot to Charles 
for the reviews. I will commit the patch to the branch shortly!

+1 on the latest patch.

 Namenode server-side storage for XAttrs
 ---

 Key: HDFS-6258
 URL: https://issues.apache.org/jira/browse/HDFS-6258
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-6258.1.patch, HDFS-6258.2.patch, HDFS-6258.3.patch, 
 HDFS-6258.4.patch, HDFS-6258.5.patch, HDFS-6258.6.patch, HDFS-6258.patch


 Namenode Server-side storage for XAttrs: FSNamesystem and friends.
 Refine XAttrConfigFlag and AclConfigFlag to ConfigFlag.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6258) Namenode server-side storage for XAttrs

2014-05-02 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-6258.
---

  Resolution: Fixed
   Fix Version/s: HDFS XAttrs (HDFS-2006)
Target Version/s: HDFS XAttrs (HDFS-2006)  (was: 3.0.0)
Hadoop Flags: Reviewed

I have just committed the patch to the branch. Thanks, all.

 Namenode server-side storage for XAttrs
 ---

 Key: HDFS-6258
 URL: https://issues.apache.org/jira/browse/HDFS-6258
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: HDFS XAttrs (HDFS-2006)

 Attachments: HDFS-6258.1.patch, HDFS-6258.2.patch, HDFS-6258.3.patch, 
 HDFS-6258.4.patch, HDFS-6258.5.patch, HDFS-6258.6.patch, HDFS-6258.patch


 Namenode Server-side storage for XAttrs: FSNamesystem and friends.
 Refine XAttrConfigFlag and AclConfigFlag to ConfigFlag.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6252) Phase out the old web UI in HDFS

2014-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987595#comment-13987595
 ] 

Hudson commented on HDFS-6252:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #557 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/557/])
HDFS-6252. Phase out the old web UI in HDFS. Contributed by Haohui Mai. 
(wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591732)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ClusterJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/browseBlock.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/browseDirectory.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/dataNodeHome.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/tail.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/block_info_xml.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/corrupt_files.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/corrupt_replicas_xml.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/decommission.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/decommission.xsl
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfsclusterhealth.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfsclusterhealth.xsl
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfsclusterhealth_utils.xsl
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfsnodelist.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/nn_browsedfscontent.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/journal/journalstatus.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMissingBlocksAlert.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestNNWithQJM.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeJsp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestClusterJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCorruptFilesJsp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHostsFiles.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecondaryWebUi.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAWebUI.java


 Phase out the old web UI in HDFS
 

 Key: HDFS-6252
 URL: https://issues.apache.org/jira/browse/HDFS-6252
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.5.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-6252.000.patch, HDFS-6252.001.patch, 
 HDFS-6252.002.patch, HDFS-6252.003.patch, HDFS-6252.004.patch, 
 HDFS-6252.005.patch, HDFS-6252.006.patch


 We've deprecated hftp and hsftp in HDFS-5570, so if we always download file 
 from download this file on the 

[jira] [Commented] (HDFS-6289) HA failover can fail if there are pending DN messages for DNs which no longer exist

2014-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987596#comment-13987596
 ] 

Hudson commented on HDFS-6289:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #557 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/557/])
HDFS-6289. HA failover can fail if there are pending DN messages for DNs which 
no longer exist. Contributed by Aaron T. Myers. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591413)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingDataNodeMessages.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPendingCorruptDnMessages.java


 HA failover can fail if there are pending DN messages for DNs which no longer 
 exist
 ---

 Key: HDFS-6289
 URL: https://issues.apache.org/jira/browse/HDFS-6289
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha
Affects Versions: 2.4.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
Priority: Critical
 Fix For: 2.5.0

 Attachments: HDFS-6289.patch, HDFS-6289.patch


 In an HA setup, the standby NN may receive messages from DNs for blocks which 
 the standby NN is not yet aware of. It queues up these messages and replays 
 them when it next reads from the edit log or fails over. On a failover, all 
 of these pending DN messages must be processed successfully in order for the 
 failover to succeed. If one of these pending DN messages refers to a DN 
 storageId that no longer exists (because the DN with that transfer address 
 has been reformatted and has re-registered with the same transfer address) 
 then on transition to active the NN will not be able to process this DN 
 message and will suicide with an error like the following:
 {noformat}
 2014-04-25 14:23:17,922 FATAL namenode.NameNode 
 (NameNode.java:doImmediateShutdown(1525)) - Error encountered requiring NN 
 shutdown. Shutting down immediately.
 java.io.IOException: Cannot mark 
 blk_1073741825_900(stored=blk_1073741825_1001) as corrupt because datanode 
 127.0.0.1:33324 does not exist
 {noformat}
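 As a rough illustration of the mechanism described above (a toy model only, not 
 the actual BlockManager/PendingDataNodeMessages code):

{code}
// Toy model of the failure mode: pending DN messages are replayed on failover,
// and a message that refers to a datanode the NN no longer knows about aborts
// the transition to active.
import java.util.*;

public class PendingDnMessagesSketch {
  static final Set<String> knownDatanodes = new HashSet<>();
  static final Queue<String> pendingMessages = new ArrayDeque<>();  // datanode addresses

  static void replayOnFailover() {
    for (String dn : pendingMessages) {
      if (!knownDatanodes.contains(dn)) {
        // In the real NN this is fatal ("Cannot mark ... as corrupt because
        // datanode ... does not exist") and the NN shuts down.
        throw new IllegalStateException("datanode " + dn + " does not exist");
      }
      // ... otherwise, process the queued message against the block map ...
    }
  }

  public static void main(String[] args) {
    knownDatanodes.add("127.0.0.1:50010");
    pendingMessages.add("127.0.0.1:33324");  // the DN was reformatted and re-registered
    try {
      replayOnFailover();
    } catch (IllegalStateException e) {
      System.out.println("failover aborted: " + e.getMessage());
    }
  }
}
{code}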



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6304) Consolidate the logic of path resolution in FSDirectory

2014-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987598#comment-13987598
 ] 

Hudson commented on HDFS-6304:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #557 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/557/])
HDFS-6304. Consolidate the logic of path resolution in FSDirectory. Contributed 
by Haohui Mai. (wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591411)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


 Consolidate the logic of path resolution in FSDirectory
 ---

 Key: HDFS-6304
 URL: https://issues.apache.org/jira/browse/HDFS-6304
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.5.0

 Attachments: HADOOP-10551.000.patch, HDFS-6304.000.patch


 Currently both FSDirectory and INodeDirectory provide helpers to resolve 
 paths to inodes. This jira proposes to move all these helpers into 
 FSDirectory to simplify the code.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6303) HDFS implementation of FileContext API for XAttrs.

2014-05-02 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987601#comment-13987601
 ] 

Uma Maheswara Rao G commented on HDFS-6303:
---

Patch looks good.
But there will be a test case failure with this: the FileContext test case tries to 
reuse FSXAttrBaseTest's test cases, and that test sets the max limit to 3 xattrs 
in order to verify the max limit. So we have to fix this failure.

Once that case is fixed, I can commit this patch.


 HDFS implementation of FileContext API for XAttrs.
 --

 Key: HDFS-6303
 URL: https://issues.apache.org/jira/browse/HDFS-6303
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: HDFS XAttrs (HDFS-2006)

 Attachments: HDFS-6303.2.patch, HDFS-6303.patch


 HDFS implementation of AbstractFileSystem and FileContext for XAttrs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6303) HDFS implementation of FileContext API for XAttrs.

2014-05-02 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6303:
-

Attachment: HDFS-6303.3.patch

Thanks Charles for refining the javadoc, and thanks Uma for the code review. I 
have updated the patch to make the test case pass.

 HDFS implementation of FileContext API for XAttrs.
 --

 Key: HDFS-6303
 URL: https://issues.apache.org/jira/browse/HDFS-6303
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: HDFS XAttrs (HDFS-2006)

 Attachments: HDFS-6303.2.patch, HDFS-6303.3.patch, HDFS-6303.patch


 HDFS implementation of AbstractFileSystem and FileContext for XAttrs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6319) Various syntax and style cleanups

2014-05-02 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6319:
---

Attachment: HDFS-6319.4.patch

With --no-prefix this time.

 Various syntax and style cleanups
 -

 Key: HDFS-6319
 URL: https://issues.apache.org/jira/browse/HDFS-6319
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Charles Lamb
Assignee: Charles Lamb
 Attachments: HDFS-6319.1.patch, HDFS-6319.2.patch, HDFS-6319.3.patch, 
 HDFS-6319.4.patch


 Fix various style issues such as the following (see the example below):
 - if(, while( [i.e. lack of a space after the keyword]
 - extra whitespace and newlines
 - if (...) return ... [lack of {}'s]
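 For illustration, a tiny before/after example of the kinds of changes involved 
 (made-up code, not from the patch):

{code}
// Illustrates the style fixes: a space after keywords and braces around
// single-statement bodies. The "before" forms are shown in comments.
public class StyleExample {
  static int clamp(int x) {
    // before: if(x < 0) return 0;
    if (x < 0) {
      return 0;
    }
    // before: while(x > 100) x--;
    while (x > 100) {
      x--;
    }
    return x;
  }

  public static void main(String[] args) {
    System.out.println(clamp(150));  // prints 100
  }
}
{code}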



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6319) Various syntax and style cleanups

2014-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987674#comment-13987674
 ] 

Hadoop QA commented on HDFS-6319:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643040/HDFS-6319.4.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6793//console

This message is automatically generated.

 Various syntax and style cleanups
 -

 Key: HDFS-6319
 URL: https://issues.apache.org/jira/browse/HDFS-6319
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Charles Lamb
Assignee: Charles Lamb
 Attachments: HDFS-6319.1.patch, HDFS-6319.2.patch, HDFS-6319.3.patch, 
 HDFS-6319.4.patch


 Fix various style issues such as:
 - if(, while( [i.e. lack of a space after the keyword]
 - extra whitespace and newlines
 - if (...) return ... [lack of {}'s]



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6252) Phase out the old web UI in HDFS

2014-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987688#comment-13987688
 ] 

Hudson commented on HDFS-6252:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1748 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1748/])
HDFS-6252. Phase out the old web UI in HDFS. Contributed by Haohui Mai. 
(wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591732)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ClusterJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/browseBlock.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/browseDirectory.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/dataNodeHome.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/tail.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/block_info_xml.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/corrupt_files.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/corrupt_replicas_xml.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/decommission.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/decommission.xsl
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfsclusterhealth.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfsclusterhealth.xsl
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfsclusterhealth_utils.xsl
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfsnodelist.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/nn_browsedfscontent.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/journal/journalstatus.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMissingBlocksAlert.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestNNWithQJM.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeJsp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestClusterJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCorruptFilesJsp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHostsFiles.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecondaryWebUi.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAWebUI.java


 Phase out the old web UI in HDFS
 

 Key: HDFS-6252
 URL: https://issues.apache.org/jira/browse/HDFS-6252
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.5.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-6252.000.patch, HDFS-6252.001.patch, 
 HDFS-6252.002.patch, HDFS-6252.003.patch, HDFS-6252.004.patch, 
 HDFS-6252.005.patch, HDFS-6252.006.patch


 We've deprecated hftp and hsftp in HDFS-5570, so if we always download file 
 from download this file on the 

[jira] [Commented] (HDFS-6252) Phase out the old web UI in HDFS

2014-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987693#comment-13987693
 ] 

Hudson commented on HDFS-6252:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1774 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1774/])
HDFS-6252. Phase out the old web UI in HDFS. Contributed by Haohui Mai. 
(wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591732)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/JspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DatanodeJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ClusterJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/browseBlock.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/browseDirectory.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/dataNodeHome.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode/tail.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/block_info_xml.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/corrupt_files.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/corrupt_replicas_xml.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/decommission.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/decommission.xsl
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfsclusterhealth.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfsclusterhealth.xsl
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfsclusterhealth_utils.xsl
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfsnodelist.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/nn_browsedfscontent.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/journal/journalstatus.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/status.jsp
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDecommission.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMissingBlocksAlert.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/TestNNWithQJM.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDatanodeJsp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestClusterJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCorruptFilesJsp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHostsFiles.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeJspHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSecondaryWebUi.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAWebUI.java


 Phase out the old web UI in HDFS
 

 Key: HDFS-6252
 URL: https://issues.apache.org/jira/browse/HDFS-6252
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.5.0
Reporter: Fengdong Yu
Assignee: Haohui Mai
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-6252.000.patch, HDFS-6252.001.patch, 
 HDFS-6252.002.patch, HDFS-6252.003.patch, HDFS-6252.004.patch, 
 HDFS-6252.005.patch, HDFS-6252.006.patch


 We've deprecated hftp and hsftp in HDFS-5570, so if we always download file 
 from download this file on 

[jira] [Commented] (HDFS-6289) HA failover can fail if there are pending DN messages for DNs which no longer exist

2014-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987689#comment-13987689
 ] 

Hudson commented on HDFS-6289:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1748 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1748/])
HDFS-6289. HA failover can fail if there are pending DN messages for DNs which 
no longer exist. Contributed by Aaron T. Myers. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591413)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingDataNodeMessages.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPendingCorruptDnMessages.java


 HA failover can fail if there are pending DN messages for DNs which no longer 
 exist
 ---

 Key: HDFS-6289
 URL: https://issues.apache.org/jira/browse/HDFS-6289
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha
Affects Versions: 2.4.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
Priority: Critical
 Fix For: 2.5.0

 Attachments: HDFS-6289.patch, HDFS-6289.patch


 In an HA setup, the standby NN may receive messages from DNs for blocks which 
 the standby NN is not yet aware of. It queues up these messages and replays 
 them when it next reads from the edit log or fails over. On a failover, all 
 of these pending DN messages must be processed successfully in order for the 
 failover to succeed. If one of these pending DN messages refers to a DN 
 storageId that no longer exists (because the DN with that transfer address 
 has been reformatted and has re-registered with the same transfer address) 
 then on transition to active the NN will not be able to process this DN 
 message and will suicide with an error like the following:
 {noformat}
 2014-04-25 14:23:17,922 FATAL namenode.NameNode 
 (NameNode.java:doImmediateShutdown(1525)) - Error encountered requiring NN 
 shutdown. Shutting down immediately.
 java.io.IOException: Cannot mark 
 blk_1073741825_900(stored=blk_1073741825_1001) as corrupt because datanode 
 127.0.0.1:33324 does not exist
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6304) Consolidate the logic of path resolution in FSDirectory

2014-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987696#comment-13987696
 ] 

Hudson commented on HDFS-6304:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1774 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1774/])
HDFS-6304. Consolidate the logic of path resolution in FSDirectory. Contributed 
by Haohui Mai. (wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591411)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


 Consolidate the logic of path resolution in FSDirectory
 ---

 Key: HDFS-6304
 URL: https://issues.apache.org/jira/browse/HDFS-6304
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.5.0

 Attachments: HADOOP-10551.000.patch, HDFS-6304.000.patch


 Currently both FSDirectory and INodeDirectory provide helpers to resolve 
 paths to inodes. This jira proposes to move all these helpers into 
 FSDirectory to simplify the code.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6304) Consolidate the logic of path resolution in FSDirectory

2014-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987691#comment-13987691
 ] 

Hudson commented on HDFS-6304:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1748 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1748/])
HDFS-6304. Consolidate the logic of path resolution in FSDirectory. Contributed 
by Haohui Mai. (wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591411)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java


 Consolidate the logic of path resolution in FSDirectory
 ---

 Key: HDFS-6304
 URL: https://issues.apache.org/jira/browse/HDFS-6304
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.5.0

 Attachments: HADOOP-10551.000.patch, HDFS-6304.000.patch


 Currently both FSDirectory and INodeDirectory provide helpers to resolve 
 paths to inodes. This jira proposes to move all these helpers into 
 FSDirectory to simplify the code.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6289) HA failover can fail if there are pending DN messages for DNs which no longer exist

2014-05-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987694#comment-13987694
 ] 

Hudson commented on HDFS-6289:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1774 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1774/])
HDFS-6289. HA failover can fail if there are pending DN messages for DNs which 
no longer exist. Contributed by Aaron T. Myers. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1591413)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingDataNodeMessages.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPendingCorruptDnMessages.java


 HA failover can fail if there are pending DN messages for DNs which no longer 
 exist
 ---

 Key: HDFS-6289
 URL: https://issues.apache.org/jira/browse/HDFS-6289
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha
Affects Versions: 2.4.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
Priority: Critical
 Fix For: 2.5.0

 Attachments: HDFS-6289.patch, HDFS-6289.patch


 In an HA setup, the standby NN may receive messages from DNs for blocks which 
 the standby NN is not yet aware of. It queues up these messages and replays 
 them when it next reads from the edit log or fails over. On a failover, all 
 of these pending DN messages must be processed successfully in order for the 
 failover to succeed. If one of these pending DN messages refers to a DN 
 storageId that no longer exists (because the DN with that transfer address 
 has been reformatted and has re-registered with the same transfer address) 
 then on transition to active the NN will not be able to process this DN 
 message and will suicide with an error like the following:
 {noformat}
 2014-04-25 14:23:17,922 FATAL namenode.NameNode 
 (NameNode.java:doImmediateShutdown(1525)) - Error encountered requiring NN 
 shutdown. Shutting down immediately.
 java.io.IOException: Cannot mark 
 blk_1073741825_900(stored=blk_1073741825_1001) as corrupt because datanode 
 127.0.0.1:33324 does not exist
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6315) Decouple recording edit logs from FSDirectory

2014-05-02 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987703#comment-13987703
 ] 

Daryn Sharp commented on HDFS-6315:
---

I haven't thoroughly examined the patch. I think moving the edit calls up to 
FSN makes sense. I'm a bit uneasy about leaking the fsd locking up into the 
fsn. That should be an internal implementation detail of the fsd. I see that 
snapshots are locking it, but that's due to reuse/abuse of the lock to protect 
the snapshot manager. I don't think the pattern should be replicated to all 
operations.

 Decouple recording edit logs from FSDirectory
 -

 Key: HDFS-6315
 URL: https://issues.apache.org/jira/browse/HDFS-6315
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-6315.000.patch, HDFS-6315.001.patch


 Currently both FSNamesystem and FSDirectory record edit logs. This design 
 requires both FSNamesystem and FSDirectory to be tightly coupled together to 
 implement a durable namespace.
 This jira proposes to separate the responsibility of implementing the 
 namespace and providing durability with edit logs. Specifically, FSDirectory 
 implements the namespace (which should have no edit log operations), and 
 FSNamesystem implements durability by recording the edit logs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6310) PBImageXmlWriter should output information about Delegation Tokens

2014-05-02 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987722#comment-13987722
 ] 

Daryn Sharp commented on HDFS-6310:
---

bq. As long as the key is out this should be fine. What I don't want is that an 
attacker can print out the token using oiv and then use the token directly, 
which might give an attacker a handy way to attack the system.

If the attacker has access to the image, it's already game over whether oiv 
accurately dumps the image or not.  They can extract the tokens and keys in 
other ways so why impede legitimate debugging?

bq. I guess we might need to clarify what compatibility means in this context.

My compatibility concern isn't strictly related to this jira, so we probably 
shouldn't debate it here. Just an explanation: it's a general concern that 
any existing tools built around the output are being broken. Perhaps this is 
fine for a major release, but within minor releases I'm not so sure.

Examples: Is the official documentation for using pig still valid?  Does 
twitter's tool still work?

http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html
https://github.com/twitter/hdfs-du

 PBImageXmlWriter should output information about Delegation Tokens
 --

 Key: HDFS-6310
 URL: https://issues.apache.org/jira/browse/HDFS-6310
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HDFS-6310.patch


 Separated from HDFS-6293.
 The 2.4.0 pb-fsimage does contain tokens, but OfflineImageViewer with -XML 
 option does not show any tokens.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6312) WebHdfs HA failover is broken on secure clusters

2014-05-02 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp reassigned HDFS-6312:
-

Assignee: Daryn Sharp

 WebHdfs HA failover is broken on secure clusters
 

 Key: HDFS-6312
 URL: https://issues.apache.org/jira/browse/HDFS-6312
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0, 2.4.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker

 When webhdfs does a failover, it blanks out the delegation token.  This will 
 cause subsequent operations against the other NN to acquire a new token.  
 Tasks cannot acquire a token (no kerberos credentials) so jobs will fail.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6312) WebHdfs HA failover is broken on secure clusters

2014-05-02 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987725#comment-13987725
 ] 

Daryn Sharp commented on HDFS-6312:
---

Yes, of course on the tests. :)  I don't enjoy being the test subject for 
security changes with no code coverage...  I may fix this as part of a later 
change.

 WebHdfs HA failover is broken on secure clusters
 

 Key: HDFS-6312
 URL: https://issues.apache.org/jira/browse/HDFS-6312
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0, 2.4.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker

 When webhdfs does a failover, it blanks out the delegation token.  This will 
 cause subsequent operations against the other NN to acquire a new token.  
 Tasks cannot acquire a token (no kerberos credentials) so jobs will fail.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6133) Make Balancer support exclude specified path

2014-05-02 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13987736#comment-13987736
 ] 

Daryn Sharp commented on HDFS-6133:
---

I'm not sure the NN needs to know about the pinning.  It may only need to be 
something the client tells the DN when establishing the pipeline.  Perhaps it 
could be stored in the existing block metadata file, but I don't know how 
extensible that file is, whether it can be changed with backwards compatibility, 
etc.   Other approaches might be using the sticky bit or the existence of a 
second file to denote a pinned block.

 Make Balancer support exclude specified path
 

 Key: HDFS-6133
 URL: https://issues.apache.org/jira/browse/HDFS-6133
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer, namenode
Reporter: zhaoyunjiong
Assignee: zhaoyunjiong
 Attachments: HDFS-6133.patch


 Currently, running the Balancer will destroy the RegionServer's data locality.
 If getBlocks could exclude blocks belonging to files with a specific path 
 prefix, like /hbase, then we could run the Balancer without destroying 
 the RegionServer's data locality.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-172) Quota exceed exception creates file of size 0

2014-05-02 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13987765#comment-13987765
 ] 

Tsz Wo Nicholas Sze commented on HDFS-172:
--

I believe exceeding the quota when copying a file could result in a partial file in 
general, not necessarily a zero-size file.  For example, suppose the quota for a 
dir is 100MB and the block size is 64MB.  Then copying a 200MB file to the dir (with 
replication = 1) will result in a quota-exceeded exception when writing the second 
block.  The file written to the dir will only have 64MB.

The summary of this JIRA should be revised to "Quota exceed exception results in 
partially created files".  However, this seems to be common behavior rather than a bug.  
No?
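
For concreteness, a minimal sketch of that scenario against the public client API; the path, sizes, and the exact point where the exception surfaces are illustrative assumptions, not part of this jira:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class QuotaPartialFileExample {
  public static void main(String[] args) throws Exception {
    // Space quota 100MB, block size 64MB, replication 1: writing a 200MB file
    // fails once the second block is requested, leaving a ~64MB partial file
    // rather than a 0-byte one.
    Configuration conf = new Configuration();
    conf.setLong("dfs.blocksize", 64L * 1024 * 1024);
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);

    Path dir = new Path("/quotaDir");                                    // illustrative path
    dfs.mkdirs(dir);
    dfs.setQuota(dir, HdfsConstants.QUOTA_DONT_SET, 100L * 1024 * 1024); // space quota only

    byte[] chunk = new byte[1024 * 1024];
    try (FSDataOutputStream out = dfs.create(new Path(dir, "bigfile"), (short) 1)) {
      for (int i = 0; i < 200; i++) {
        out.write(chunk);          // the quota check trips when block 2 is allocated
      }
    } catch (DSQuotaExceededException e) {
      // The first 64MB block may already be persisted, so the file is partial, not empty.
    }
  }
}
{code}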

 Quota exceed exception creates file of size 0
 -

 Key: HDFS-172
 URL: https://issues.apache.org/jira/browse/HDFS-172
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ravi Phulari

 An empty file of size 0 is created when a QuotaExceeded exception occurs while 
 copying a file. This file is created with the same name as the file whose copy 
 was attempted.
 I.e., if the operation 
 hadoop fs -copyFromLocal testFile1 /testDir   
 fails due to a quota exceeded exception, then testFile1 of size 0 is created in 
 testDir on HDFS.
 Steps to verify: 
 1) Create testDir and apply a space quota of 16kb
 2) Copy a file, say testFile, of size greater than 16kb from the local file system
 3) You should see a QuotaException error 
 4) testFile of size 0 is created in testDir, which is not expected.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6165) hdfs dfs -rm -r and hdfs -rmdir commands can't remove empty directory

2014-05-02 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13987766#comment-13987766
 ] 

Daryn Sharp commented on HDFS-6165:
---

I'm struggling to catch up on jiras so could you post a short summary of why 
changing the behavior of subaccess is insufficient?  It appears to maybe only 
be used by delete so adding another parameter to permission checking might be 
overkill.

Also, as previously mentioned you can't catch RemoteException since it's 
specific to hdfs and thus FsShell won't work correctly with other filesystems.

 hdfs dfs -rm -r and hdfs -rmdir commands can't remove empty directory 
 --

 Key: HDFS-6165
 URL: https://issues.apache.org/jira/browse/HDFS-6165
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.3.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
Priority: Minor
 Attachments: HDFS-6165.001.patch, HDFS-6165.002.patch, 
 HDFS-6165.003.patch, HDFS-6165.004.patch, HDFS-6165.004.patch, 
 HDFS-6165.005.patch, HDFS-6165.006.patch, HDFS-6165.006.patch


 Given a directory owned by user A with WRITE permission containing an empty 
 directory owned by user B, it is not possible to delete user B's empty 
 directory with either hdfs dfs -rm -r or hdfs dfs -rmdir, because the 
 current implementation requires FULL permission on the empty directory and 
 throws an exception. 
 On the other hand, on Linux the rm -r and rmdir commands can remove an empty 
 directory as long as the parent directory has WRITE permission (and the prefix 
 components of the path have EXECUTE permission). Of the tested OSes, some 
 prompt the user for confirmation and some don't.
 Here's a reproduction:
 {code}
 [root@vm01 ~]# hdfs dfs -ls /user/
 Found 4 items
 drwxr-xr-x   - userabc users   0 2013-05-03 01:55 /user/userabc
 drwxr-xr-x   - hdfs    supergroup  0 2013-05-03 00:28 /user/hdfs
 drwxrwxrwx   - mapred  hadoop  0 2013-05-03 00:13 /user/history
 drwxr-xr-x   - hdfs    supergroup  0 2013-04-14 16:46 /user/hive
 [root@vm01 ~]# hdfs dfs -ls /user/userabc
 Found 8 items
 drwx------   - userabc users  0 2013-05-02 17:00 /user/userabc/.Trash
 drwxr-xr-x   - userabc users  0 2013-05-03 01:34 /user/userabc/.cm
 drwx------   - userabc users  0 2013-05-03 01:06 
 /user/userabc/.staging
 drwxr-xr-x   - userabc users  0 2013-04-14 18:31 /user/userabc/apps
 drwxr-xr-x   - userabc users  0 2013-04-30 18:05 /user/userabc/ds
 drwxr-xr-x   - hdfs    users  0 2013-05-03 01:54 /user/userabc/foo
 drwxr-xr-x   - userabc users  0 2013-04-30 16:18 
 /user/userabc/maven_source
 drwxr-xr-x   - hdfs    users  0 2013-05-03 01:40 
 /user/userabc/test-restore
 [root@vm01 ~]# hdfs dfs -ls /user/userabc/foo/
 [root@vm01 ~]# sudo -u userabc hdfs dfs -rm -r -skipTrash /user/userabc/foo
 rm: Permission denied: user=userabc, access=ALL, 
 inode=/user/userabc/foo:hdfs:users:drwxr-xr-x
 {code}
 The super user can delete the directory.
 {code}
 [root@vm01 ~]# sudo -u hdfs hdfs dfs -rm -r -skipTrash /user/userabc/foo
 Deleted /user/userabc/foo
 {code}
 The same is not true for files, however. They have the correct behavior.
 {code}
 [root@vm01 ~]# sudo -u hdfs hdfs dfs -touchz /user/userabc/foo-file
 [root@vm01 ~]# hdfs dfs -ls /user/userabc/
 Found 8 items
 drwx------   - userabc users  0 2013-05-02 17:00 /user/userabc/.Trash
 drwxr-xr-x   - userabc users  0 2013-05-03 01:34 /user/userabc/.cm
 drwx------   - userabc users  0 2013-05-03 01:06 
 /user/userabc/.staging
 drwxr-xr-x   - userabc users  0 2013-04-14 18:31 /user/userabc/apps
 drwxr-xr-x   - userabc users  0 2013-04-30 18:05 /user/userabc/ds
 -rw-r--r--   1 hdfs    users  0 2013-05-03 02:11 
 /user/userabc/foo-file
 drwxr-xr-x   - userabc users  0 2013-04-30 16:18 
 /user/userabc/maven_source
 drwxr-xr-x   - hdfs    users  0 2013-05-03 01:40 
 /user/userabc/test-restore
 [root@vm01 ~]# sudo -u userabc hdfs dfs -rm -skipTrash /user/userabc/foo-file
 Deleted /user/userabc/foo-file
 {code}
 Using hdfs dfs -rmdir command:
 {code}
 bash-4.1$ hadoop fs -lsr /
 lsr: DEPRECATED: Please use 'ls -R' instead.
 drwxr-xr-x   - hdfs supergroup  0 2014-03-25 16:29 /user
 drwxr-xr-x   - hdfs   supergroup  0 2014-03-25 16:28 /user/hdfs
 drwxr-xr-x   - usrabc users   0 2014-03-28 23:39 /user/usrabc
 drwxr-xr-x   - abc     abc         0 2014-03-28 23:39 
 /user/usrabc/foo-empty1
 [root@vm01 usrabc]# su usrabc
 [usrabc@vm01 ~]$ hdfs dfs -rmdir /user/usrabc/foo-empty1
 rmdir: Permission denied: user=usrabc, access=ALL, 
 inode=/user/usrabc/foo-empty1:abc:abc:drwxr-xr-x
 {code}



--
This message was 

[jira] [Commented] (HDFS-6317) Add snapshot quota

2014-05-02 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13987773#comment-13987773
 ] 

Jakob Homan commented on HDFS-6317:
---

To allow admins to limit the number of snapshots per directory to a number 
below the currently hardcoded value of 64k.

 Add snapshot quota
 --

 Key: HDFS-6317
 URL: https://issues.apache.org/jira/browse/HDFS-6317
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Alex Shafer

 Either allow the 65k snapshot limit to be set with a configuration option  or 
 add a per-directory snapshot quota settable with the `hdfs dfsadmin` CLI and 
 viewable by appending fields to `hdfs dfs -count -q` output.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6317) Add snapshot quota

2014-05-02 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13987779#comment-13987779
 ] 

Tsz Wo Nicholas Sze commented on HDFS-6317:
---

What is the reason to limit the number of snapshots?  Note that we already have a 
namespace quota, which also limits the namespace usage of snapshots.

 Add snapshot quota
 --

 Key: HDFS-6317
 URL: https://issues.apache.org/jira/browse/HDFS-6317
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Alex Shafer

 Either allow the 65k snapshot limit to be set with a configuration option  or 
 add a per-directory snapshot quota settable with the `hdfs dfsadmin` CLI and 
 viewable by appending fields to `hdfs dfs -count -q` output.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6324) Shift XAttr helper code out for reuse.

2014-05-02 Thread Yi Liu (JIRA)
Yi Liu created HDFS-6324:


 Summary: Shift XAttr helper code out for reuse.
 Key: HDFS-6324
 URL: https://issues.apache.org/jira/browse/HDFS-6324
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: HDFS XAttrs (HDFS-2006)


Shift XAttr helper code out for reuse: in DFSClient and WebHDFS.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6324) Shift XAttr helper code out for reuse.

2014-05-02 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6324:
-

Issue Type: Sub-task  (was: Improvement)
Parent: HDFS-2006

 Shift XAttr helper code out for reuse.
 --

 Key: HDFS-6324
 URL: https://issues.apache.org/jira/browse/HDFS-6324
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: HDFS XAttrs (HDFS-2006)


 Shift XAttr helper code out for reuse: in DFSClient and WebHDFS.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6317) Add snapshot quota

2014-05-02 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13987791#comment-13987791
 ] 

Jakob Homan commented on HDFS-6317:
---

First, as a matter of cluster policy, admins may wish to impose such limits.  
Second, to help bound the work of recursive tools run against snapshotted 
directories, which otherwise face an (effectively) unbounded tree.

bq.  Note that we already have a namespace quota, which also limits the 
namespace usage of snapshots.
Noted, but this is a different quota.

 Add snapshot quota
 --

 Key: HDFS-6317
 URL: https://issues.apache.org/jira/browse/HDFS-6317
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Alex Shafer

 Either allow the 65k snapshot limit to be set with a configuration option  or 
 add a per-directory snapshot quota settable with the `hdfs dfsadmin` CLI and 
 viewable by appending fields to `hdfs dfs -count -q` output.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6324) Shift XAttr helper code out for reuse.

2014-05-02 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-6324:
-

Attachment: HDFS-6324.patch

A separate class for XAttr helper methods.

 Shift XAttr helper code out for reuse.
 --

 Key: HDFS-6324
 URL: https://issues.apache.org/jira/browse/HDFS-6324
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: HDFS XAttrs (HDFS-2006)

 Attachments: HDFS-6324.patch


 Shift XAttr helper code out for reuse: in DFSClient and WebHDFS.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work started] (HDFS-6324) Shift XAttr helper code out for reuse.

2014-05-02 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-6324 started by Yi Liu.

 Shift XAttr helper code out for reuse.
 --

 Key: HDFS-6324
 URL: https://issues.apache.org/jira/browse/HDFS-6324
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: HDFS XAttrs (HDFS-2006)

 Attachments: HDFS-6324.patch


 Shift XAttr helper code out for reuse: in DFSClient and WebHDFS.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6303) HDFS implementation of FileContext API for XAttrs.

2014-05-02 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13987814#comment-13987814
 ] 

Uma Maheswara Rao G commented on HDFS-6303:
---

+ on the latest patch. Thanks a lot, Yi and Charles!


 HDFS implementation of FileContext API for XAttrs.
 --

 Key: HDFS-6303
 URL: https://issues.apache.org/jira/browse/HDFS-6303
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: HDFS XAttrs (HDFS-2006)

 Attachments: HDFS-6303.2.patch, HDFS-6303.3.patch, HDFS-6303.patch


 HDFS implementation of AbstractFileSystem and FileContext for XAttrs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HDFS-6303) HDFS implementation of FileContext API for XAttrs.

2014-05-02 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13987814#comment-13987814
 ] 

Uma Maheswara Rao G edited comment on HDFS-6303 at 5/2/14 3:44 PM:
---

+1 on the latest patch. Thanks a lot, Yi and Charles!



was (Author: umamaheswararao):
+ on the latest patch. Thanks a lot, Yi and Charles!


 HDFS implementation of FileContext API for XAttrs.
 --

 Key: HDFS-6303
 URL: https://issues.apache.org/jira/browse/HDFS-6303
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: HDFS XAttrs (HDFS-2006)

 Attachments: HDFS-6303.2.patch, HDFS-6303.3.patch, HDFS-6303.patch


 HDFS implementation of AbstractFileSystem and FileContext for XAttrs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6303) HDFS implementation of FileContext API for XAttrs.

2014-05-02 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-6303.
---

  Resolution: Fixed
Hadoop Flags: Reviewed

I have committed this to branch.

 HDFS implementation of FileContext API for XAttrs.
 --

 Key: HDFS-6303
 URL: https://issues.apache.org/jira/browse/HDFS-6303
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: HDFS XAttrs (HDFS-2006)

 Attachments: HDFS-6303.2.patch, HDFS-6303.3.patch, HDFS-6303.patch


 HDFS implementation of AbstractFileSystem and FileContext for XAttrs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6324) Shift XAttr helper code out for reuse.

2014-05-02 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13987848#comment-13987848
 ] 

Charles Lamb commented on HDFS-6324:


I just have a handful of little things.

XAttrHelper.java:

Please add a newline after public class XAttrHelper {

+   * Name can not be null and value can be null, also name prefix 
+   * will be validated. 

Name can not be null. Value can be null. The name and prefix are validated.

int prefixIndex = name.indexOf(".");

Please add a final.

} else if (prefixIndex == name.length() -1) {

s/-1/- 1/

+  throw new HadoopIllegalArgumentException("XAttr name must be prefixed with " +
+   "user/trusted/security/system and '.'");

An XAttr name must be prefixed with user/trusted/security/system, followed by 
a '.'
Same change further down in the same method.

 String prefix = name.substring(0, prefixIndex);

Please add a final.
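
To make the rule being reviewed concrete, here is a hedged, self-contained sketch of the name/prefix validation only; the XAttr.Builder calls and the exact messages follow the review comments above, but this is not the patch's actual XAttrHelper and the builder API on the feature branch may differ:

{code}
import org.apache.hadoop.HadoopIllegalArgumentException;
import org.apache.hadoop.fs.XAttr;

public class XAttrNameSketch {
  // Name cannot be null. Value can be null. The name and prefix are validated.
  public static XAttr buildXAttr(String name, byte[] value) {
    if (name == null) {
      throw new HadoopIllegalArgumentException("XAttr name cannot be null.");
    }
    final int prefixIndex = name.indexOf(".");
    if (prefixIndex < 1 || prefixIndex == name.length() - 1) {
      throw new HadoopIllegalArgumentException("An XAttr name must be prefixed with "
          + "user/trusted/security/system, followed by a '.'");
    }
    final String prefix = name.substring(0, prefixIndex).toLowerCase();
    final XAttr.NameSpace ns;
    if ("user".equals(prefix)) {
      ns = XAttr.NameSpace.USER;
    } else if ("trusted".equals(prefix)) {
      ns = XAttr.NameSpace.TRUSTED;
    } else if ("security".equals(prefix)) {
      ns = XAttr.NameSpace.SECURITY;
    } else if ("system".equals(prefix)) {
      ns = XAttr.NameSpace.SYSTEM;
    } else {
      throw new HadoopIllegalArgumentException("An XAttr name must be prefixed with "
          + "user/trusted/security/system, followed by a '.'");
    }
    return new XAttr.Builder().setNameSpace(ns)
        .setName(name.substring(prefixIndex + 1)).setValue(value).build();
  }
}
{code}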


 Shift XAttr helper code out for reuse.
 --

 Key: HDFS-6324
 URL: https://issues.apache.org/jira/browse/HDFS-6324
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor
 Fix For: HDFS XAttrs (HDFS-2006)

 Attachments: HDFS-6324.patch


 Shift XAttr helper code out for reuse: in DFSClient and WebHDFS.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6165) hdfs dfs -rm -r and hdfs -rmdir commands can't remove empty directory

2014-05-02 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13987870#comment-13987870
 ] 

Yongjun Zhang commented on HDFS-6165:
-

Hi [~daryn],

Thanks a lot for your comments. Adding the additional parameter is to avoid 
changing the behavior of any other callers of checkPermission. If subAccess is 
only used by the deleteInternal method, then we can actually remove the additional 
parameter and change the behavior when we check subAccess.

About catching RemoteException, thanks for pointing that out; I will do some 
further study on how to address that.
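
To make the alternative concrete, here is a tiny self-contained model (not NameNode code; the ignoreEmptyDir name is only illustrative) of the behavior change being discussed, where an empty directory no longer needs FULL permission on itself as long as the parent grants WRITE:

{code}
import java.util.Collections;
import java.util.List;

public class EmptyDirDeleteModel {
  // Model of the permission decision only; the real checks also cover EXECUTE
  // on path prefixes and live inside FSPermissionChecker.
  static boolean mayDelete(boolean parentWritable, boolean dirFullPerm,
      List<String> children, boolean ignoreEmptyDir) {
    if (!parentWritable) {
      return false;                          // parent WRITE is always required
    }
    if (ignoreEmptyDir && children.isEmpty()) {
      return true;                           // proposed: an empty dir needs nothing more
    }
    return dirFullPerm;                      // current: FULL access on the dir itself
  }

  public static void main(String[] args) {
    List<String> empty = Collections.emptyList();
    System.out.println(mayDelete(true, false, empty, true));   // true  -> matches rmdir on Linux
    System.out.println(mayDelete(true, false, empty, false));  // false -> today's behavior
  }
}
{code}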


 hdfs dfs -rm -r and hdfs -rmdir commands can't remove empty directory 
 --

 Key: HDFS-6165
 URL: https://issues.apache.org/jira/browse/HDFS-6165
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.3.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
Priority: Minor
 Attachments: HDFS-6165.001.patch, HDFS-6165.002.patch, 
 HDFS-6165.003.patch, HDFS-6165.004.patch, HDFS-6165.004.patch, 
 HDFS-6165.005.patch, HDFS-6165.006.patch, HDFS-6165.006.patch


 Given a directory owned by user A with WRITE permission containing an empty 
 directory owned by user B, it is not possible to delete user B's empty 
 directory with either hdfs dfs -rm -r or hdfs dfs -rmdir, because the 
 current implementation requires FULL permission on the empty directory and 
 throws an exception. 
 On the other hand, on Linux the rm -r and rmdir commands can remove an empty 
 directory as long as the parent directory has WRITE permission (and the prefix 
 components of the path have EXECUTE permission). Of the tested OSes, some 
 prompt the user for confirmation and some don't.
 Here's a reproduction:
 {code}
 [root@vm01 ~]# hdfs dfs -ls /user/
 Found 4 items
 drwxr-xr-x   - userabc users   0 2013-05-03 01:55 /user/userabc
 drwxr-xr-x   - hdfs    supergroup  0 2013-05-03 00:28 /user/hdfs
 drwxrwxrwx   - mapred  hadoop  0 2013-05-03 00:13 /user/history
 drwxr-xr-x   - hdfs    supergroup  0 2013-04-14 16:46 /user/hive
 [root@vm01 ~]# hdfs dfs -ls /user/userabc
 Found 8 items
 drwx------   - userabc users  0 2013-05-02 17:00 /user/userabc/.Trash
 drwxr-xr-x   - userabc users  0 2013-05-03 01:34 /user/userabc/.cm
 drwx------   - userabc users  0 2013-05-03 01:06 
 /user/userabc/.staging
 drwxr-xr-x   - userabc users  0 2013-04-14 18:31 /user/userabc/apps
 drwxr-xr-x   - userabc users  0 2013-04-30 18:05 /user/userabc/ds
 drwxr-xr-x   - hdfs    users  0 2013-05-03 01:54 /user/userabc/foo
 drwxr-xr-x   - userabc users  0 2013-04-30 16:18 
 /user/userabc/maven_source
 drwxr-xr-x   - hdfs    users  0 2013-05-03 01:40 
 /user/userabc/test-restore
 [root@vm01 ~]# hdfs dfs -ls /user/userabc/foo/
 [root@vm01 ~]# sudo -u userabc hdfs dfs -rm -r -skipTrash /user/userabc/foo
 rm: Permission denied: user=userabc, access=ALL, 
 inode=/user/userabc/foo:hdfs:users:drwxr-xr-x
 {code}
 The super user can delete the directory.
 {code}
 [root@vm01 ~]# sudo -u hdfs hdfs dfs -rm -r -skipTrash /user/userabc/foo
 Deleted /user/userabc/foo
 {code}
 The same is not true for files, however. They have the correct behavior.
 {code}
 [root@vm01 ~]# sudo -u hdfs hdfs dfs -touchz /user/userabc/foo-file
 [root@vm01 ~]# hdfs dfs -ls /user/userabc/
 Found 8 items
 drwx------   - userabc users  0 2013-05-02 17:00 /user/userabc/.Trash
 drwxr-xr-x   - userabc users  0 2013-05-03 01:34 /user/userabc/.cm
 drwx------   - userabc users  0 2013-05-03 01:06 
 /user/userabc/.staging
 drwxr-xr-x   - userabc users  0 2013-04-14 18:31 /user/userabc/apps
 drwxr-xr-x   - userabc users  0 2013-04-30 18:05 /user/userabc/ds
 -rw-r--r--   1 hdfs    users  0 2013-05-03 02:11 
 /user/userabc/foo-file
 drwxr-xr-x   - userabc users  0 2013-04-30 16:18 
 /user/userabc/maven_source
 drwxr-xr-x   - hdfs    users  0 2013-05-03 01:40 
 /user/userabc/test-restore
 [root@vm01 ~]# sudo -u userabc hdfs dfs -rm -skipTrash /user/userabc/foo-file
 Deleted /user/userabc/foo-file
 {code}
 Using hdfs dfs -rmdir command:
 {code}
 bash-4.1$ hadoop fs -lsr /
 lsr: DEPRECATED: Please use 'ls -R' instead.
 drwxr-xr-x   - hdfs supergroup  0 2014-03-25 16:29 /user
 drwxr-xr-x   - hdfs   supergroup  0 2014-03-25 16:28 /user/hdfs
 drwxr-xr-x   - usrabc users   0 2014-03-28 23:39 /user/usrabc
 drwxr-xr-x   - abc     abc         0 2014-03-28 23:39 
 /user/usrabc/foo-empty1
 [root@vm01 usrabc]# su usrabc
 [usrabc@vm01 ~]$ hdfs dfs -rmdir /user/usrabc/foo-empty1
 rmdir: Permission denied: user=usrabc, access=ALL, 
 inode=/user/usrabc/foo-empty1:abc:abc:drwxr-xr-x
 

[jira] [Commented] (HDFS-6290) File is not closed in OfflineImageViewerPB#run()

2014-05-02 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13987886#comment-13987886
 ] 

Ted Yu commented on HDFS-6290:
--

This was observed in trunk code.

 File is not closed in OfflineImageViewerPB#run()
 

 Key: HDFS-6290
 URL: https://issues.apache.org/jira/browse/HDFS-6290
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor

 {code}
   } else if (processor.equals(XML)) {
 new PBImageXmlWriter(conf, out).visit(new RandomAccessFile(inputFile,
 r));
 {code}
 The RandomAccessFile instance should be closed before the method returns.
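
One possible shape of the fix, shown only as a sketch of this branch of OfflineImageViewerPB#run() (surrounding code and imports elided); try-with-resources closes the file on every exit path:

{code}
} else if (processor.equals("XML")) {
  // Sketch only: ensure the RandomAccessFile is closed even if visit() throws.
  try (RandomAccessFile file = new RandomAccessFile(inputFile, "r")) {
    new PBImageXmlWriter(conf, out).visit(file);
  }
}
{code}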



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6317) Add snapshot quota

2014-05-02 Thread Alex Shafer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13987976#comment-13987976
 ] 

Alex Shafer commented on HDFS-6317:
---

Using snapshots will create a floor in the directory's namespace quota: the 
namespace usage at the time a snapshot is taken becomes a minimum until the 
snapshots are deleted. The ability to configure the limit or set an 
additional quota would mitigate this issue.
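
A small sketch of that floor effect against the public snapshot and quota APIs; the path and numbers are illustrative only:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class SnapshotQuotaFloorExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);

    Path dir = new Path("/snapDir");                          // illustrative path
    dfs.mkdirs(dir);
    dfs.setQuota(dir, 1000, HdfsConstants.QUOTA_DONT_SET);    // namespace quota: 1000 inodes
    dfs.allowSnapshot(dir);

    // ... create some files under dir, then:
    dfs.createSnapshot(dir, "s0");

    // Deleting the live files now does not free namespace quota: the inodes are
    // retained by snapshot s0 and keep counting against the quota until
    // dfs.deleteSnapshot(dir, "s0") is called.
    ContentSummary cs = dfs.getContentSummary(dir);
    System.out.println("quota=" + cs.getQuota()
        + " used=" + (cs.getFileCount() + cs.getDirectoryCount()));
  }
}
{code}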

 Add snapshot quota
 --

 Key: HDFS-6317
 URL: https://issues.apache.org/jira/browse/HDFS-6317
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Alex Shafer

 Either allow the 65k snapshot limit to be set with a configuration option  or 
 add a per-directory snapshot quota settable with the `hdfs dfsadmin` CLI and 
 viewable by appending fields to `hdfs dfs -count -q` output.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6315) Decouple recording edit logs from FSDirectory

2014-05-02 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13987995#comment-13987995
 ] 

Haohui Mai commented on HDFS-6315:
--

Makes sense. I've just checked all usages of FSDirectory.readLock() and 
FSDirectory.writeLock(). It looks to me that all callers already hold the 
FSNamesystem lock before taking the FSDirectory lock. I plan to remove 
the FSDirectory lock in subsequent jiras.
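
As a hedged illustration of the nesting described here (variable names are placeholders and real call sites vary in detail), the pattern at the callers generally looks like:

{code}
// The FSNamesystem lock is taken first at essentially every call site, so the
// inner FSDirectory lock adds no additional protection.
fsNamesystem.writeLock();
try {
  fsDirectory.writeLock();   // candidate for removal in the follow-up jiras
  try {
    // ... mutate the namespace and record the edit log entry ...
  } finally {
    fsDirectory.writeUnlock();
  }
} finally {
  fsNamesystem.writeUnlock();
}
{code}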

 Decouple recording edit logs from FSDirectory
 -

 Key: HDFS-6315
 URL: https://issues.apache.org/jira/browse/HDFS-6315
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-6315.000.patch, HDFS-6315.001.patch


 Currently both FSNamesystem and FSDirectory record edit logs. This design 
 requires both FSNamesystem and FSDirectory to be tightly coupled together to 
 implement a durable namespace.
 This jira proposes to separate the responsibility of implementing the 
 namespace and providing durability with edit logs. Specifically, FSDirectory 
 implements the namespace (which should have no edit log operations), and 
 FSNamesystem implement durability by recording the edit logs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5436) Move HsFtpFileSystem and HFtpFileSystem into org.apache.hdfs.web

2014-05-02 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988004#comment-13988004
 ] 

Tsuyoshi OZAWA commented on HDFS-5436:
--

[~wheat9], [~arpitagarwal] I found that the latest patch removes 
HftpFileSystem.java. It blocks HDFS-6193, which is a blocker for the 2.4.1 release. 
Can you restore it? 

{code}
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HftpFileSystem.java
deleted file mode 100644
{code}

 Move HsFtpFileSystem and HFtpFileSystem into org.apache.hdfs.web
 

 Key: HDFS-5436
 URL: https://issues.apache.org/jira/browse/HDFS-5436
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.3.0

 Attachments: HDFS-5436.000.patch, HDFS-5436.001.patch, 
 HDFS-5436.002.patch


 Currently HsftpFilesystem, HftpFileSystem and WebHdfsFileSystem reside in 
 different packages. This forces several methods in ByteInputStream and 
 URLConnectionFactory to be public methods.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5436) Move HsFtpFileSystem and HFtpFileSystem into org.apache.hdfs.web

2014-05-02 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-5436:
-

Affects Version/s: 2.4.1

 Move HsFtpFileSystem and HFtpFileSystem into org.apache.hdfs.web
 

 Key: HDFS-5436
 URL: https://issues.apache.org/jira/browse/HDFS-5436
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.1
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.3.0

 Attachments: HDFS-5436.000.patch, HDFS-5436.001.patch, 
 HDFS-5436.002.patch


 Currently HsftpFilesystem, HftpFileSystem and WebHdfsFileSystem reside in 
 different packages. This forces several methods in ByteInputStream and 
 URLConnectionFactory to be public methods.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5436) Move HsFtpFileSystem and HFtpFileSystem into org.apache.hdfs.web

2014-05-02 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-5436:
-

Priority: Blocker  (was: Major)

 Move HsFtpFileSystem and HFtpFileSystem into org.apache.hdfs.web
 

 Key: HDFS-5436
 URL: https://issues.apache.org/jira/browse/HDFS-5436
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.4.1
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Blocker
 Fix For: 2.3.0

 Attachments: HDFS-5436.000.patch, HDFS-5436.001.patch, 
 HDFS-5436.002.patch


 Currently HsftpFilesystem, HftpFileSystem and WebHdfsFileSystem reside in 
 different packages. This forces several methods in ByteInputStream and 
 URLConnectionFactory to be public methods.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5436) Move HsFtpFileSystem and HFtpFileSystem into org.apache.hdfs.web

2014-05-02 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HDFS-5436:
-

 Priority: Major  (was: Blocker)
Affects Version/s: (was: 2.4.1)

 Move HsFtpFileSystem and HFtpFileSystem into org.apache.hdfs.web
 

 Key: HDFS-5436
 URL: https://issues.apache.org/jira/browse/HDFS-5436
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.3.0

 Attachments: HDFS-5436.000.patch, HDFS-5436.001.patch, 
 HDFS-5436.002.patch


 Currently HsftpFilesystem, HftpFileSystem and WebHdfsFileSystem reside in 
 different packages. This forces several methods in ByteInputStream and 
 URLConnectionFactory to be public methods.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5436) Move HsFtpFileSystem and HFtpFileSystem into org.apache.hdfs.web

2014-05-02 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988023#comment-13988023
 ] 

Tsuyoshi OZAWA commented on HDFS-5436:
--

Oops, there is HftpFileSystem in this patch. Sorry for my mistake. 

 Move HsFtpFileSystem and HFtpFileSystem into org.apache.hdfs.web
 

 Key: HDFS-5436
 URL: https://issues.apache.org/jira/browse/HDFS-5436
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.3.0

 Attachments: HDFS-5436.000.patch, HDFS-5436.001.patch, 
 HDFS-5436.002.patch


 Currently HsftpFilesystem, HftpFileSystem and WebHdfsFileSystem reside in 
 different packages. This forces several methods in ByteInputStream and 
 URLConnectionFactory to be public methods.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-05-02 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988028#comment-13988028
 ] 

Tsuyoshi OZAWA commented on HDFS-6193:
--

Is HftpFileSystem missing from trunk now? Please correct me if I'm wrong.

 HftpFileSystem open should throw FileNotFoundException for non-existing paths
 -

 Key: HDFS-6193
 URL: https://issues.apache.org/jira/browse/HDFS-6193
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6193-branch-2.4.0.v01.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; the error is deferred until the 
 next read. This is counterintuitive and not how the local FS or HDFS work. In POSIX 
 you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6255) fuse_dfs will not adhere to ACL permissions in some cases

2014-05-02 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-6255.
-

Resolution: Not a Problem

Hi, [~schu].  I'm going to resolve this based on my last comment about fuse 
itself likely rejecting access before fuse_dfs gets involved at all.  If you 
find that this isn't what's happening in your environment and it really does 
look like a bad interaction with HDFS ACLs, please feel free to reopen.  Thank 
you.

 fuse_dfs will not adhere to ACL permissions in some cases
 -

 Key: HDFS-6255
 URL: https://issues.apache.org/jira/browse/HDFS-6255
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fuse-dfs
Affects Versions: 3.0.0, 2.4.0
Reporter: Stephen Chu
Assignee: Chris Nauroth

 As hdfs user, I created a directory /tmp/acl_dir/ and set permissions to 700. 
 Then I set a new acl group:jenkins:rwx on /tmp/acl_dir.
 {code}
 jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -getfacl /tmp/acl_dir
 # file: /tmp/acl_dir
 # owner: hdfs
 # group: supergroup
 user::rwx
 group::---
 group:jenkins:rwx
 mask::rwx
 other::---
 {code}
 Through the FsShell, the jenkins user can list /tmp/acl_dir as well as create 
 a file and directory inside.
 {code}
 [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -touchz /tmp/acl_dir/testfile1
 [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -mkdir /tmp/acl_dir/testdir1
 [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -ls /tmp/acl_dir/
 Found 2 items
 drwxr-xr-x   - jenkins supergroup  0 2014-04-17 19:11 
 /tmp/acl_dir/testdir1
 -rw-r--r--   1 jenkins supergroup  0 2014-04-17 19:11 
 /tmp/acl_dir/testfile1
 [jenkins@hdfs-vanilla-1 ~]$ 
 {code}
 However, as the same jenkins user, when I try to cd into /tmp/acl_dir using a 
 fuse_dfs mount, I get permission denied. Same permission denied when I try to 
 create or list files.
 {code}
 [jenkins@hdfs-vanilla-1 tmp]$ ls -l
 total 16
 drwxrwx--- 4 hdfs    nobody 4096 Apr 17 19:11 acl_dir
 drwx------ 2 hdfs    nobody 4096 Apr 17 18:30 acl_dir_2
 drwxr-xr-x 3 mapred  nobody 4096 Mar 11 03:53 mapred
 drwxr-xr-x 4 jenkins nobody 4096 Apr 17 07:25 testcli
 -rwx------ 1 hdfs    nobody    0 Apr  7 17:18 tf1
 [jenkins@hdfs-vanilla-1 tmp]$ cd acl_dir
 bash: cd: acl_dir: Permission denied
 [jenkins@hdfs-vanilla-1 tmp]$ touch acl_dir/testfile2
 touch: cannot touch `acl_dir/testfile2': Permission denied
 [jenkins@hdfs-vanilla-1 tmp]$ mkdir acl_dir/testdir2
 mkdir: cannot create directory `acl_dir/testdir2': Permission denied
 [jenkins@hdfs-vanilla-1 tmp]$ 
 {code}
 The fuse_dfs debug output doesn't show any error for the above operations:
 {code}
 unique: 18, opcode: OPENDIR (27), nodeid: 2, insize: 48
unique: 18, success, outsize: 32
 unique: 19, opcode: READDIR (28), nodeid: 2, insize: 80
 readdir[0] from 0
unique: 19, success, outsize: 312
 unique: 20, opcode: GETATTR (3), nodeid: 2, insize: 56
 getattr /tmp
unique: 20, success, outsize: 120
 unique: 21, opcode: READDIR (28), nodeid: 2, insize: 80
unique: 21, success, outsize: 16
 unique: 22, opcode: RELEASEDIR (29), nodeid: 2, insize: 64
unique: 22, success, outsize: 16
 unique: 23, opcode: GETATTR (3), nodeid: 2, insize: 56
 getattr /tmp
unique: 23, success, outsize: 120
 unique: 24, opcode: GETATTR (3), nodeid: 3, insize: 56
 getattr /tmp/acl_dir
unique: 24, success, outsize: 120
 unique: 25, opcode: GETATTR (3), nodeid: 3, insize: 56
 getattr /tmp/acl_dir
unique: 25, success, outsize: 120
 unique: 26, opcode: GETATTR (3), nodeid: 3, insize: 56
 getattr /tmp/acl_dir
unique: 26, success, outsize: 120
 unique: 27, opcode: GETATTR (3), nodeid: 3, insize: 56
 getattr /tmp/acl_dir
unique: 27, success, outsize: 120
 unique: 28, opcode: GETATTR (3), nodeid: 3, insize: 56
 getattr /tmp/acl_dir
unique: 28, success, outsize: 120
 {code}
 In other scenarios, ACL permissions are enforced successfully. For example, 
 as hdfs user I create /tmp/acl_dir_2 and set permissions to 777. I then set 
 the acl user:jenkins:--- on the directory. On the fuse mount, I am not able 
 to ls, mkdir, or touch to that directory as jenkins user.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6315) Decouple recording edit logs from FSDirectory

2014-05-02 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988043#comment-13988043
 ] 

Haohui Mai commented on HDFS-6315:
--

Sorry. It looks like HDFS-5693 made an optimization that takes the FSDirectory 
lock without holding the FSNamesystem lock. That change can be 
reverted when removing the FSDirectory lock.

 Decouple recording edit logs from FSDirectory
 -

 Key: HDFS-6315
 URL: https://issues.apache.org/jira/browse/HDFS-6315
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-6315.000.patch, HDFS-6315.001.patch


 Currently both FSNamesystem and FSDirectory record edit logs. This design 
 requires both FSNamesystem and FSDirectory to be tightly coupled together to 
 implement a durable namespace.
 This jira proposes to separate the responsibility of implementing the 
 namespace and providing durability with edit logs. Specifically, FSDirectory 
 implements the namespace (which should have no edit log operations), and 
 FSNamesystem implement durability by recording the edit logs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-05-02 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988058#comment-13988058
 ] 

Gera Shegalov commented on HDFS-6193:
-

Hi [~ozawa], yeah Hftp was recently kicked out with HDFS-5570

 HftpFileSystem open should throw FileNotFoundException for non-existing paths
 -

 Key: HDFS-6193
 URL: https://issues.apache.org/jira/browse/HDFS-6193
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6193-branch-2.4.0.v01.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; the error is deferred until the 
 next read. This is counterintuitive and not how the local FS or HDFS work. In POSIX 
 you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5436) Move HsFtpFileSystem and HFtpFileSystem into org.apache.hdfs.web

2014-05-02 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988061#comment-13988061
 ] 

Tsuyoshi OZAWA commented on HDFS-5436:
--

Ah, I found HDFS-5570 deprecated and removed HftpFileSystem. I'm sorry for all 
the fuss.

 Move HsFtpFileSystem and HFtpFileSystem into org.apache.hdfs.web
 

 Key: HDFS-5436
 URL: https://issues.apache.org/jira/browse/HDFS-5436
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.3.0

 Attachments: HDFS-5436.000.patch, HDFS-5436.001.patch, 
 HDFS-5436.002.patch


 Currently HsftpFilesystem, HftpFileSystem and WebHdfsFileSystem reside in 
 different packages. This forces several methods in ByteInputStream and 
 URLConnectionFactory to be public methods.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-2139) Fast copy for HDFS.

2014-05-02 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988059#comment-13988059
 ] 

Daryn Sharp commented on HDFS-2139:
---

I glanced through the patch but haven't studied it.  Initial questions:
# Are block tokens being checked for this operation?
# Does the DN enforce no linking of UC blocks?

 Fast copy for HDFS.
 ---

 Key: HDFS-2139
 URL: https://issues.apache.org/jira/browse/HDFS-2139
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Pritam Damania
 Attachments: HDFS-2139.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 There is a need to perform fast file copy on HDFS. The fast copy mechanism 
 for a file works as follows (a rough client-side sketch follows the list):
 1) Query metadata for all blocks of the source file.
 2) For each block 'b' of the file, find out its datanode locations.
 3) For each block of the file, add an empty block to the namesystem for
 the destination file.
 4) For each location of the block, instruct the datanode to make a local
 copy of that block.
 5) Once each datanode has copied over its respective blocks, they
 report to the namenode about it.
 6) Wait for all blocks to be copied and exit.
 This would speed up the copying process considerably by removing top-of-rack 
 data transfers.
 Note: an extra improvement would be to instruct the datanode to create a 
 hardlink of the block file if we are copying a block on the same datanode.
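
A rough, client-side sketch of those steps; every helper below (queryBlocks, addEmptyBlock, requestLocalCopy, waitForReports) is hypothetical shorthand for NameNode/DataNode RPCs a real patch would have to define:

{code}
// Hypothetical outline only; not the attached patch.
void fastCopy(String src, String dst) throws IOException {
  List<LocatedBlock> srcBlocks = queryBlocks(src);            // steps 1-2: blocks + locations
  for (LocatedBlock blk : srcBlocks) {
    LocatedBlock dstBlk = addEmptyBlock(dst);                 // step 3: empty block for dst
    for (DatanodeInfo dn : blk.getLocations()) {
      // step 4: ask each replica's DN to copy (or hardlink) the block locally
      requestLocalCopy(dn, blk.getBlock(), dstBlk.getBlock());
    }
  }
  waitForReports(dst, srcBlocks.size());                      // steps 5-6: wait, then finalize
}
{code}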



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6290) File is not closed in OfflineImageViewerPB#run()

2014-05-02 Thread Hardik Pandya (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988095#comment-13988095
 ] 

Hardik Pandya commented on HDFS-6290:
-

cool, thanks!

 File is not closed in OfflineImageViewerPB#run()
 

 Key: HDFS-6290
 URL: https://issues.apache.org/jira/browse/HDFS-6290
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor

 {code}
   } else if (processor.equals(XML)) {
 new PBImageXmlWriter(conf, out).visit(new RandomAccessFile(inputFile,
 r));
 {code}
 The RandomAccessFile instance should be closed before the method returns.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6319) Various syntax and style cleanups

2014-05-02 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6319:
---

Attachment: HDFS-6319.6.patch

 Various syntax and style cleanups
 -

 Key: HDFS-6319
 URL: https://issues.apache.org/jira/browse/HDFS-6319
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Charles Lamb
Assignee: Charles Lamb
 Attachments: HDFS-6319.1.patch, HDFS-6319.2.patch, HDFS-6319.3.patch, 
 HDFS-6319.4.patch, HDFS-6319.6.patch


 Fix various style issues, such as:
 - if(, while( [i.e. lack of a space after the keyword]
 - extra whitespace and newlines
 - if (...) return ... [lack of {}'s]



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-05-02 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988084#comment-13988084
 ] 

Tsuyoshi OZAWA commented on HDFS-6193:
--

Thanks for the pointer, [~jira.shegalov]! Now I can apply your patch against 
branch-2.4.0. However, some compilation errors occur with the patch.

In HftpFileSystem, RangeHeaderInputStream cannot call the super constructor as 
follows:
{code}
static class RangeHeaderInputStream extends ByteRangeInputStream {
   RangeHeaderInputStream(RangeHeaderUrlOpener o, RangeHeaderUrlOpener r)
throws IOException {
  super(o, r, true);
}
{code}

FileDataServlet: the method ExceptionHandler.toHttpStatus is missing:
{code}
  response.sendError(ExceptionHandler.toHttpStatus(e),
  StringUtils.stringifyException(e));
{code}

Can you check them? Thanks!

 HftpFileSystem open should throw FileNotFoundException for non-existing paths
 -

 Key: HDFS-6193
 URL: https://issues.apache.org/jira/browse/HDFS-6193
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6193-branch-2.4.0.v01.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; the error is deferred until the 
 next read. This is counterintuitive and not how the local FS or HDFS work. In POSIX 
 you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-05-02 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988171#comment-13988171
 ] 

Gera Shegalov commented on HDFS-6193:
-

Will upload a fixed version shortly.

 HftpFileSystem open should throw FileNotFoundException for non-existing paths
 -

 Key: HDFS-6193
 URL: https://issues.apache.org/jira/browse/HDFS-6193
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6193-branch-2.4.0.v01.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open' does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound; the error is deferred until the 
 next read. This is counterintuitive and not how the local FS or HDFS work. In POSIX 
 you get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6305) WebHdfs response decoding may throw RuntimeExceptions

2014-05-02 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988180#comment-13988180
 ] 

Chris Nauroth commented on HDFS-6305:
-

Nice work, Daryn.  Thank you for adding the failure tests too.  I have just one 
question.

bq. The json decoding routines do not validate the expected fields are present 
which may cause NPEs.

It wasn't clear to me if this patch is really doing anything to address this 
part.  I don't see addition of explicit validation checks, and it also doesn't 
look like an NPE would be handled any differently.  Can you please clarify?  
Thanks!

 WebHdfs response decoding may throw RuntimeExceptions
 -

 Key: HDFS-6305
 URL: https://issues.apache.org/jira/browse/HDFS-6305
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-6305.patch


 WebHdfs does not guard against exceptions while decoding the response 
 payload.  The json parser will throw RunTime exceptions on malformed 
 responses.  The json decoding routines do not validate the expected fields 
 are present which may cause NPEs.
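
As a hedged sketch of the kind of validation the description calls for (not the attached patch), a small decoding helper can turn a missing field into a clear IOException instead of an NPE:

{code}
import java.io.IOException;
import java.util.Map;

public final class JsonFieldSketch {
  // Fail with a descriptive IOException rather than an NPE when a field is absent.
  static Object requireField(Map<?, ?> json, String key) throws IOException {
    if (json == null) {
      throw new IOException("Malformed WebHDFS response: no JSON payload");
    }
    Object value = json.get(key);
    if (value == null) {
      throw new IOException("Malformed WebHDFS response: missing field '" + key + "'");
    }
    return value;   // callers cast to the expected type
  }
}
{code}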



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6295) Add decommissioning state and node state filtering to dfsadmin

2014-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988182#comment-13988182
 ] 

Hadoop QA commented on HDFS-6295:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12642971/hdfs-6295-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6796//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6796//console

This message is automatically generated.

 Add decommissioning state and node state filtering to dfsadmin
 

 Key: HDFS-6295
 URL: https://issues.apache.org/jira/browse/HDFS-6295
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.4.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-6295-1.patch, hdfs-6295-2.patch


 One of the few admin-friendly ways of viewing the list of decommissioning 
 nodes is via hdfs dfsadmin -report. However, this lists *all* the datanodes 
 on the cluster, which is prohibitive for large clusters, and also requires 
 manual parsing to look at the decom status. It'd be nicer if we could fetch 
 and display only decommissioning nodes (or just live and dead nodes for that 
 matter).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6326) WebHdfs ACL compatibility is broken

2014-05-02 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-6326:
-

 Summary: WebHdfs ACL compatibility is broken
 Key: HDFS-6326
 URL: https://issues.apache.org/jira/browse/HDFS-6326
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.4.0, 3.0.0
Reporter: Daryn Sharp
Priority: Blocker


2.4 ACL support is completely incompatible with pre-2.4 webhdfs servers.  The NN 
throws an {{IllegalArgumentException}}.

{code}
hadoop fs -ls webhdfs://nn/
Found 21 items
ls: Invalid value for webhdfs parameter op: No enum constant 
org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETACLSTATUS
[... 20 more times...]
{code}




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6325) Append should fail if the last block has unsufficient number of replicas

2014-05-02 Thread Konstantin Shvachko (JIRA)
Konstantin Shvachko created HDFS-6325:
-

 Summary: Append should fail if the last block has unsufficient 
number of replicas
 Key: HDFS-6325
 URL: https://issues.apache.org/jira/browse/HDFS-6325
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.2.0
Reporter: Konstantin Shvachko
 Fix For: 2.5.0


Currently append() succeeds on a file with the last block that has no replicas. 
But the subsequent updatePipeline() fails as there are no replicas with the 
exception Unable to retrieve blocks locations for last block. This leaves the 
file unclosed, and others can not do anything with it until its lease expires.
The solution is to check replicas of the last block on the NameNode and fail 
during append() rather than during updatePipeline().
How many replicas should be present before the NN allows an append? I see two 
options (a rough sketch of the proposed check follows the list):
# min-replication: allow append if the last block is minimally replicated (1 by 
default)
# full-replication: allow append if the last block is fully replicated (3 by 
default)
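
A hedged sketch of what the min-replication variant could look like on the NameNode side; the surrounding types exist, but the placement and exact shape of this check are assumptions, not a committed fix:

{code}
// Inside the append path, before setting up the pipeline (sketch only).
BlockInfo lastBlock = file.getLastBlock();
if (lastBlock != null && lastBlock.numNodes() < minReplication) {
  throw new IOException("append: last block " + lastBlock + " of " + src
      + " has only " + lastBlock.numNodes() + " replica(s), but at least "
      + minReplication + " required");
}
{code}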



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6325) Append should fail if the last block has unsufficient number of replicas

2014-05-02 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988209#comment-13988209
 ] 

Konstantin Shvachko commented on HDFS-6325:
---

An easy way to reproduce this is to create a file on a cluster, then restart NN 
without DNs, manually leave SafeMode, and try to append data to the file. On a 
real cluster one can kill 3 DNs (on different racks); then some blocks will be 
missing, and it is likely that one of them will be the last block of some file.

 Append should fail if the last block has unsufficient number of replicas
 

 Key: HDFS-6325
 URL: https://issues.apache.org/jira/browse/HDFS-6325
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.2.0
Reporter: Konstantin Shvachko

 Currently append() succeeds on a file with the last block that has no 
 replicas. But the subsequent updatePipeline() fails as there are no replicas 
 with the exception Unable to retrieve blocks locations for last block. This 
 leaves the file unclosed, and others can not do anything with it until its 
 lease expires.
 The solution is to check replicas of the last block on the NameNode and fail 
 during append() rather than during updatePipeline().
 How many replicas should be present before NN allows to append? I see two 
 options:
 # min-replication: allow append if the last block is minimally replicated (1 
 by default)
 # full-replication: allow append if the last block is fully replicated (3 by 
 default)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6325) Append should fail if the last block has unsufficient number of replicas

2014-05-02 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-6325:
--

Fix Version/s: (was: 2.5.0)

 Append should fail if the last block has unsufficient number of replicas
 

 Key: HDFS-6325
 URL: https://issues.apache.org/jira/browse/HDFS-6325
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.2.0
Reporter: Konstantin Shvachko

 Currently append() succeeds on a file with the last block that has no 
 replicas. But the subsequent updatePipeline() fails as there are no replicas 
 with the exception Unable to retrieve blocks locations for last block. This 
 leaves the file unclosed, and others can not do anything with it until its 
 lease expires.
 The solution is to check replicas of the last block on the NameNode and fail 
 during append() rather than during updatePipeline().
 How many replicas should be present before NN allows to append? I see two 
 options:
 # min-replication: allow append if the last block is minimally replicated (1 
 by default)
 # full-replication: allow append if the last block is fully replicated (3 by 
 default)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6325) Append should fail if the last block has unsufficient number of replicas

2014-05-02 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-6325:
--

Target Version/s: 2.5.0

 Append should fail if the last block has unsufficient number of replicas
 

 Key: HDFS-6325
 URL: https://issues.apache.org/jira/browse/HDFS-6325
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.2.0
Reporter: Konstantin Shvachko

 Currently append() succeeds on a file with the last block that has no 
 replicas. But the subsequent updatePipeline() fails as there are no replicas 
 with the exception Unable to retrieve blocks locations for last block. This 
 leaves the file unclosed, and others can not do anything with it until its 
 lease expires.
 The solution is to check replicas of the last block on the NameNode and fail 
 during append() rather than during updatePipeline().
 How many replicas should be present before NN allows to append? I see two 
 options:
 # min-replication: allow append if the last block is minimally replicated (1 
 by default)
 # full-replication: allow append if the last block is fully replicated (3 by 
 default)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6315) Decouple recording edit logs from FSDirectory

2014-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988215#comment-13988215
 ] 

Hadoop QA commented on HDFS-6315:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643001/HDFS-6315.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits
  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6795//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6795//console

This message is automatically generated.

 Decouple recording edit logs from FSDirectory
 -

 Key: HDFS-6315
 URL: https://issues.apache.org/jira/browse/HDFS-6315
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-6315.000.patch, HDFS-6315.001.patch


 Currently both FSNamesystem and FSDirectory record edit logs. This design 
 requires both FSNamesystem and FSDirectory to be tightly coupled together to 
 implement a durable namespace.
 This jira proposes to separate the responsibility of implementing the 
 namespace and providing durability with edit logs. Specifically, FSDirectory 
 implements the namespace (which should have no edit log operations), and 
 FSNamesystem implement durability by recording the edit logs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6326) WebHdfs ACL compatibility is broken

2014-05-02 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988218#comment-13988218
 ] 

Daryn Sharp commented on HDFS-6326:
---

{{hasAcl}} has RPC-specific checks to determine if ACLs are not supported.  The 
Jersey initialization of the query param fails to create the enum.  There is no 
good way to detect that ACLs aren't enabled w/o a hack to catch illegal 
arguments and mince the message string. :(

The ACLs should have been returned in the file status, or at least a flag to 
indicate that ACLs are present.
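
For illustration only, here is a sketch of the kind of message-string hack being 
lamented here. It assumes the old server surfaces the failure as an IOException 
whose text contains the op name shown in the issue description; the class and 
method names are made up and this is not proposed code:

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclStatus;

class AclFallbackSketch {
  // Hypothetical fallback: treat a GETACLSTATUS failure from an old server as
  // "ACLs not supported" by mincing the exception message.
  static AclStatus getAclStatusOrNull(FileSystem fs, Path path) throws IOException {
    try {
      return fs.getAclStatus(path);
    } catch (IOException e) {
      String msg = e.getMessage();
      if (msg != null && msg.contains("GETACLSTATUS")) {
        return null;  // assume the server predates ACL support
      }
      throw e;        // anything else is a real error
    }
  }
}
{code}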

 WebHdfs ACL compatibility is broken
 ---

 Key: HDFS-6326
 URL: https://issues.apache.org/jira/browse/HDFS-6326
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0, 2.4.0
Reporter: Daryn Sharp
Priority: Blocker

 2.4 ACL support is completely incompatible with 2.4 webhdfs servers.  The NN 
 throws an {{IllegalArgumentException}} exception.
 {code}
 hadoop fs -ls webhdfs://nn/
 Found 21 items
 ls: Invalid value for webhdfs parameter op: No enum constant 
 org.apache.hadoop.hdfs.web.resources.GetOpParam.Op.GETACLSTATUS
 [... 20 more times...]
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6268) Better sorting in NetworkTopology#pseudoSortByDistance when no local node is found

2014-05-02 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-6268:
--

Attachment: hdfs-6268-3.patch

Here's a new patch which does the rework ATM proposed. This has the nice 
property of also sharing code with NetworkTopologyWithNodeGroup.

I realized we can't integrate the decom/stale sorting since NetworkTopology is 
in hadoop-common, but I don't think that's a biggie.

 Better sorting in NetworkTopology#pseudoSortByDistance when no local node is 
 found
 --

 Key: HDFS-6268
 URL: https://issues.apache.org/jira/browse/HDFS-6268
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.4.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor
 Attachments: hdfs-6268-1.patch, hdfs-6268-2.patch, hdfs-6268-3.patch


 In NetworkTopology#pseudoSortByDistance, if no local node is found, it will 
 always place the first rack local node in the list in front.
 This became an issue when a dataset was loaded from a single datanode. This 
 datanode ended up being the first replica for all the blocks in the dataset. 
 When running an Impala query, the non-local reads when reading past a block 
 boundary were all hitting this node, meaning massive load skew.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6287) Add vecsum test of libhdfs read access times

2014-05-02 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988246#comment-13988246
 ] 

Chris Nauroth commented on HDFS-6287:
-

Hi, Colin.  Thanks for posting this.  Did you find that you needed to use SSE 
to get the addition fast enough so that the benchmark highlights read 
throughput instead of sum computation?  IOW, could we potentially simplify this 
patch to not use SSE at all and still have a valid benchmark?

I think it would be helpful to add a comment with a high-level summary of what 
vecsum does, maybe right before the {{main}}.

I have one minor comment on the code itself so far.  I think you can remove the 
{{hdfsFreeBuilder}} call.  {{hdfsBuilderConnect}} always frees the builder, 
whether it succeeds or fails.  The only time you would need to call 
{{hdfsFreeBuilder}} directly is if you allocated a builder but then never 
attempted to connect with it.  I don't see any way for that to happen in the 
{{libhdfs_data_create}} code.

 Add vecsum test of libhdfs read access times
 

 Key: HDFS-6287
 URL: https://issues.apache.org/jira/browse/HDFS-6287
 Project: Hadoop HDFS
  Issue Type: Test
  Components: libhdfs, test
Affects Versions: 2.5.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HDFS-6282.001.patch, HDFS-6287.002.patch, 
 HDFS-6287.003.patch, HDFS-6287.004.patch


 Add vecsum, a benchmark that tests libhdfs access times.  This includes 
 short-circuit, zero-copy, and standard libhdfs access modes.  It also has a 
 local filesystem mode for comparison.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6295) Add decommissioning state and node state filtering to dfsadmin

2014-05-02 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-6295:
--

Attachment: hdfs-6295-3.patch

Missed updating the existing test since I changed the print format a bit, my bad.

 Add decommissioning state and node state filtering to dfsadmin
 

 Key: HDFS-6295
 URL: https://issues.apache.org/jira/browse/HDFS-6295
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.4.0
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-6295-1.patch, hdfs-6295-2.patch, hdfs-6295-3.patch


 One of the few admin-friendly ways of viewing the list of decommissioning 
 nodes is via hdfs dfsadmin -report. However, this lists *all* the datanodes 
 on the cluster, which is prohibitive for large clusters, and also requires 
 manual parsing to look at the decom status. It'd be nicer if we could fetch 
 and display only decommissioning nodes (or just live and dead nodes for that 
 matter).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6056) Clean up NFS config settings

2014-05-02 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988269#comment-13988269
 ] 

Aaron T. Myers commented on HDFS-6056:
--

bq. Since nfs is in another project, do we still want to move all the 
configuration keys to hadoop-common and hdfs? Maybe in this jira we can first 
make the conf names more consistent and unified, and consider the moving part 
in a separate jira?

Higher level question here - why is any of this code in Common? It's all to 
support HDFS NFS access, so I don't really see how this could be used by 
YARN/MR, or independently of HDFS. Given that, my suggestion would be to move 
all the NFS-related code into the hadoop-hdfs-nfs project, and put all the 
relevant configs there.

Having several of the configs in Common (and therefore with different prefixes 
than the rest of the configs) only seems like it will cause user confusion, and 
not really serve to help anything, given the above.

 Clean up NFS config settings
 

 Key: HDFS-6056
 URL: https://issues.apache.org/jira/browse/HDFS-6056
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.3.0
Reporter: Aaron T. Myers
Assignee: Brandon Li
 Attachments: HDFS-6056.001.patch, HDFS-6056.002.patch


 As discussed on HDFS-6050, there's a few opportunities to improve the config 
 settings related to NFS. This JIRA is to implement those changes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6325) Append should fail if the last block has insufficient number of replicas

2014-05-02 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-6325:
--

Summary: Append should fail if the last block has insufficient number of 
replicas  (was: Append should fail if the last block has unsufficient number of 
replicas)

 Append should fail if the last block has insufficient number of replicas
 

 Key: HDFS-6325
 URL: https://issues.apache.org/jira/browse/HDFS-6325
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.2.0
Reporter: Konstantin Shvachko

 Currently append() succeeds on a file with the last block that has no 
 replicas. But the subsequent updatePipeline() fails as there are no replicas 
 with the exception Unable to retrieve blocks locations for last block. This 
 leaves the file unclosed, and others can not do anything with it until its 
 lease expires.
 The solution is to check replicas of the last block on the NameNode and fail 
 during append() rather than during updatePipeline().
 How many replicas should be present before NN allows to append? I see two 
 options:
 # min-replication: allow append if the last block is minimally replicated (1 
 by default)
 # full-replication: allow append if the last block is fully replicated (3 by 
 default)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6255) fuse_dfs will not adhere to ACL permissions in some cases

2014-05-02 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988286#comment-13988286
 ] 

Colin Patrick McCabe commented on HDFS-6255:


Thanks for looking at this, Chris.  Stephen, can you try again with 
{{-oallow_other}} and confirm that it works?

 fuse_dfs will not adhere to ACL permissions in some cases
 -

 Key: HDFS-6255
 URL: https://issues.apache.org/jira/browse/HDFS-6255
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fuse-dfs
Affects Versions: 3.0.0, 2.4.0
Reporter: Stephen Chu
Assignee: Chris Nauroth

 As hdfs user, I created a directory /tmp/acl_dir/ and set permissions to 700. 
 Then I set a new acl group:jenkins:rwx on /tmp/acl_dir.
 {code}
 jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -getfacl /tmp/acl_dir
 # file: /tmp/acl_dir
 # owner: hdfs
 # group: supergroup
 user::rwx
 group::---
 group:jenkins:rwx
 mask::rwx
 other::---
 {code}
 Through the FsShell, the jenkins user can list /tmp/acl_dir as well as create 
 a file and directory inside.
 {code}
 [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -touchz /tmp/acl_dir/testfile1
 [jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -mkdir /tmp/acl_dir/testdir1
 hdfs dfs -ls /tmp/acl[jenkins@hdfs-vanilla-1 ~]$ hdfs dfs -ls /tmp/acl_dir/
 Found 2 items
 drwxr-xr-x   - jenkins supergroup  0 2014-04-17 19:11 
 /tmp/acl_dir/testdir1
 -rw-r--r--   1 jenkins supergroup  0 2014-04-17 19:11 
 /tmp/acl_dir/testfile1
 [jenkins@hdfs-vanilla-1 ~]$ 
 {code}
 However, as the same jenkins user, when I try to cd into /tmp/acl_dir using a 
 fuse_dfs mount, I get permission denied. Same permission denied when I try to 
 create or list files.
 {code}
 [jenkins@hdfs-vanilla-1 tmp]$ ls -l
 total 16
 drwxrwx--- 4 hdfs    nobody 4096 Apr 17 19:11 acl_dir
 drwx------ 2 hdfs    nobody 4096 Apr 17 18:30 acl_dir_2
 drwxr-xr-x 3 mapred  nobody 4096 Mar 11 03:53 mapred
 drwxr-xr-x 4 jenkins nobody 4096 Apr 17 07:25 testcli
 -rwx------ 1 hdfs    nobody    0 Apr  7 17:18 tf1
 [jenkins@hdfs-vanilla-1 tmp]$ cd acl_dir
 bash: cd: acl_dir: Permission denied
 [jenkins@hdfs-vanilla-1 tmp]$ touch acl_dir/testfile2
 touch: cannot touch `acl_dir/testfile2': Permission denied
 [jenkins@hdfs-vanilla-1 tmp]$ mkdir acl_dir/testdir2
 mkdir: cannot create directory `acl_dir/testdir2': Permission denied
 [jenkins@hdfs-vanilla-1 tmp]$ 
 {code}
 The fuse_dfs debug output doesn't show any error for the above operations:
 {code}
 unique: 18, opcode: OPENDIR (27), nodeid: 2, insize: 48
unique: 18, success, outsize: 32
 unique: 19, opcode: READDIR (28), nodeid: 2, insize: 80
 readdir[0] from 0
unique: 19, success, outsize: 312
 unique: 20, opcode: GETATTR (3), nodeid: 2, insize: 56
 getattr /tmp
unique: 20, success, outsize: 120
 unique: 21, opcode: READDIR (28), nodeid: 2, insize: 80
unique: 21, success, outsize: 16
 unique: 22, opcode: RELEASEDIR (29), nodeid: 2, insize: 64
unique: 22, success, outsize: 16
 unique: 23, opcode: GETATTR (3), nodeid: 2, insize: 56
 getattr /tmp
unique: 23, success, outsize: 120
 unique: 24, opcode: GETATTR (3), nodeid: 3, insize: 56
 getattr /tmp/acl_dir
unique: 24, success, outsize: 120
 unique: 25, opcode: GETATTR (3), nodeid: 3, insize: 56
 getattr /tmp/acl_dir
unique: 25, success, outsize: 120
 unique: 26, opcode: GETATTR (3), nodeid: 3, insize: 56
 getattr /tmp/acl_dir
unique: 26, success, outsize: 120
 unique: 27, opcode: GETATTR (3), nodeid: 3, insize: 56
 getattr /tmp/acl_dir
unique: 27, success, outsize: 120
 unique: 28, opcode: GETATTR (3), nodeid: 3, insize: 56
 getattr /tmp/acl_dir
unique: 28, success, outsize: 120
 {code}
 In other scenarios, ACL permissions are enforced successfully. For example, 
 as hdfs user I create /tmp/acl_dir_2 and set permissions to 777. I then set 
 the acl user:jenkins:--- on the directory. On the fuse mount, I am not able 
 to ls, mkdir, or touch to that directory as jenkins user.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6327) Clean up FSDirectory

2014-05-02 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-6327:


 Summary: Clean up FSDirectory
 Key: HDFS-6327
 URL: https://issues.apache.org/jira/browse/HDFS-6327
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Haohui Mai
Assignee: Haohui Mai


This is an umbrella jira that covers the cleanup work on the FSDirectory class.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6315) Decouple recording edit logs from FSDirectory

2014-05-02 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6315:
-

Issue Type: Sub-task  (was: Improvement)
Parent: HDFS-6327

 Decouple recording edit logs from FSDirectory
 -

 Key: HDFS-6315
 URL: https://issues.apache.org/jira/browse/HDFS-6315
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-6315.000.patch, HDFS-6315.001.patch


 Currently both FSNamesystem and FSDirectory record edit logs. This design 
 requires both FSNamesystem and FSDirectory to be tightly coupled together to 
 implement a durable namespace.
 This jira proposes to separate the responsibility of implementing the 
 namespace and providing durability with edit logs. Specifically, FSDirectory 
 implements the namespace (which should have no edit log operations), and 
 FSNamesystem implement durability by recording the edit logs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-05-02 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-6193:


Attachment: HDFS-6193-branch-2.4.v02.patch

 HftpFileSystem open should throw FileNotFoundException for non-existing paths
 -

 Key: HDFS-6193
 URL: https://issues.apache.org/jira/browse/HDFS-6193
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6193-branch-2.4.0.v01.patch, 
 HDFS-6193-branch-2.4.v02.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open', does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound, it's deferred until next 
 read. It's counterintuitive and not how local FS or HDFS work. In POSIX you 
 get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6328) Simplify code in FSDirectory

2014-05-02 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-6328:


 Summary: Simplify code in FSDirectory
 Key: HDFS-6328
 URL: https://issues.apache.org/jira/browse/HDFS-6328
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai


This jira proposes:

# Clean up dead code in FSDirectory.
# Simplify the control flows that IntelliJ flags as warnings.
# Move functions related to resolving paths into one place.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6328) Simplify code in FSDirectory

2014-05-02 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6328:
-

Attachment: HDFS-6328.000.patch

 Simplify code in FSDirectory
 

 Key: HDFS-6328
 URL: https://issues.apache.org/jira/browse/HDFS-6328
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-6328.000.patch


 This jira proposes:
 # Clean up dead code in FSDirectory.
 # Simplify the control flows that IntelliJ flags as warnings.
 # Move functions related to resolving paths into one place.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6328) Simplify code in FSDirectory

2014-05-02 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6328:
-

Status: Patch Available  (was: Open)

 Simplify code in FSDirectory
 

 Key: HDFS-6328
 URL: https://issues.apache.org/jira/browse/HDFS-6328
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-6328.000.patch


 This jira proposes:
 # Clean up dead code in FSDirectory.
 # Simplify the control flows that IntelliJ flags as warnings.
 # Move functions related to resolving paths into one place.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6325) Append should fail if the last block has insufficient number of replicas

2014-05-02 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov updated HDFS-6325:
---

Attachment: appendTest.patch

Attaching a patch to reproduce the issue. This can be reproduced on a 
MiniDFSCluster with 1 NameNode and 1 DataNode.

Once the NameNode has zero block locations for some file X and is out of 
SafeMode, the following events will happen:
The first append to X will fail with 'unable to retrieve last block of file X'.
Subsequent appends will fail with AlreadyBeingCreatedException until lease 
recovery occurs.
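
For readers who want to try this without the patch, here is a rough sketch of the 
scenario on a MiniDFSCluster. The API usage is from memory and the attached 
appendTest.patch remains the authoritative reproduction:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;

public class AppendReproSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      DistributedFileSystem fs = cluster.getFileSystem();
      Path fileX = new Path("/fileX");
      FSDataOutputStream out = fs.create(fileX);
      out.write(new byte[1024]);                      // give X a last block
      out.close();

      cluster.shutdownDataNodes();                    // the only replica goes away
      cluster.restartNameNode(false);                 // NN restarts with zero block locations
      fs = cluster.getFileSystem();
      fs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);  // manually leave SafeMode

      fs.append(fileX);  // fails today with "unable to retrieve last block", per above
    } finally {
      cluster.shutdown();
    }
  }
}
{code}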

 Append should fail if the last block has insufficient number of replicas
 

 Key: HDFS-6325
 URL: https://issues.apache.org/jira/browse/HDFS-6325
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.2.0
Reporter: Konstantin Shvachko
 Attachments: appendTest.patch


 Currently append() succeeds on a file with the last block that has no 
 replicas. But the subsequent updatePipeline() fails as there are no replicas 
 with the exception Unable to retrieve blocks locations for last block. This 
 leaves the file unclosed, and others can not do anything with it until its 
 lease expires.
 The solution is to check replicas of the last block on the NameNode and fail 
 during append() rather than during updatePipeline().
 How many replicas should be present before NN allows to append? I see two 
 options:
 # min-replication: allow append if the last block is minimally replicated (1 
 by default)
 # full-replication: allow append if the last block is fully replicated (3 by 
 default)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6325) Append should fail if the last block has insufficient number of replicas

2014-05-02 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov reassigned HDFS-6325:
--

Assignee: Plamen Jeliazkov

 Append should fail if the last block has insufficient number of replicas
 

 Key: HDFS-6325
 URL: https://issues.apache.org/jira/browse/HDFS-6325
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.2.0
Reporter: Konstantin Shvachko
Assignee: Plamen Jeliazkov
 Attachments: appendTest.patch


 Currently append() succeeds on a file with the last block that has no 
 replicas. But the subsequent updatePipeline() fails as there are no replicas 
 with the exception Unable to retrieve blocks locations for last block. This 
 leaves the file unclosed, and others can not do anything with it until its 
 lease expires.
 The solution is to check replicas of the last block on the NameNode and fail 
 during append() rather than during updatePipeline().
 How many replicas should be present before NN allows to append? I see two 
 options:
 # min-replication: allow append if the last block is minimally replicated (1 
 by default)
 # full-replication: allow append if the last block is fully replicated (3 by 
 default)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6293) Issues with OIV processing PB-based fsimages

2014-05-02 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988321#comment-13988321
 ] 

Suresh Srinivas commented on HDFS-6293:
---

Here is the summary of a quick call I had with [~nroberts], [~kihwal], and 
[~wheat9].

Requirements for the tool:
- It should be able to print consistent file system information. This rules 
out just doing ls -r from the standby (let's assume the standby supports 
reads); a directory should not appear twice due to renames.
- The tool should print hierarchical namespace information so that a process 
without a lot of memory can consume the information.

Here is the proposal:
- Add a flag (turned off by default) to print the hierarchical namespace to a 
configurable directory location after checkpointing is complete
- This information will only be printed by the standby namenode
- The last N such namespace information files will be retained, where N is 
configurable

We did consider printing this information as protobuf. But printing large 
hierarchical information is not straightforward and takes time. In the interest 
of time, we will print this in JSON or text (let me know what you think).

In the future, we can make the output format of the tool configurable, possibly 
supporting protobuf. This tool can also grow to include other stats related to 
the namespace. [~kihwal] and [~wheat9], let me know if I got this right.

 Issues with OIV processing PB-based fsimages
 

 Key: HDFS-6293
 URL: https://issues.apache.org/jira/browse/HDFS-6293
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Priority: Blocker
 Attachments: Heap Histogram.html


 There are issues with OIV when processing fsimages in protobuf. 
 Due to the internal layout changes introduced by the protobuf-based fsimage, 
 OIV consumes excessive amount of memory.  We have tested with a fsimage with 
 about 140M files/directories. The peak heap usage when processing this image 
 in pre-protobuf (i.e. pre-2.4.0) format was about 350MB.  After converting 
 the image to the protobuf format on 2.4.0, OIV would OOM even with 80GB of 
 heap (max new size was 1GB).  It should be possible to process any image with 
 the default heap size of 1.5GB.
 Another issue is the complete change of format/content in OIV's XML output.  
 I also noticed that the secret manager section has no tokens while there were 
 unexpired tokens in the original image (pre-2.4.0).  I did not check whether 
 they were also missing in the new pb fsimage.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6056) Clean up NFS config settings

2014-05-02 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988322#comment-13988322
 ] 

Brandon Li commented on HDFS-6056:
--

I am working on the new patch here, which will move most of the NFS-related 
configurations into the hadoop-hdfs-nfs project. In terms of code organization, 
the code in the HDFS project is a specific NFS implementation only for HDFS. The 
code in Common is mostly ONCRPC and NFS protocol spec related class 
definitions (along with file system independent utilities), and can be used to 
implement NFS access to different Hadoop compatible file systems. 


 Clean up NFS config settings
 

 Key: HDFS-6056
 URL: https://issues.apache.org/jira/browse/HDFS-6056
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.3.0
Reporter: Aaron T. Myers
Assignee: Brandon Li
 Attachments: HDFS-6056.001.patch, HDFS-6056.002.patch


 As discussed on HDFS-6050, there's a few opportunities to improve the config 
 settings related to NFS. This JIRA is to implement those changes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-05-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988327#comment-13988327
 ] 

Hadoop QA commented on HDFS-6193:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12643130/HDFS-6193-branch-2.4.v02.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6799//console

This message is automatically generated.

 HftpFileSystem open should throw FileNotFoundException for non-existing paths
 -

 Key: HDFS-6193
 URL: https://issues.apache.org/jira/browse/HDFS-6193
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6193-branch-2.4.0.v01.patch, 
 HDFS-6193-branch-2.4.v02.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open', does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound, it's deferred until next 
 read. It's counterintuitive and not how local FS or HDFS work. In POSIX you 
 get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6056) Clean up NFS config settings

2014-05-02 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988336#comment-13988336
 ] 

Aaron T. Myers commented on HDFS-6056:
--

bq. In terms of code organization, the code in the HDFS project is a specific 
NFS implementation only for HDFS. The code in Common is mostly ONCRPC and NFS 
protocol spec related class definitions (along with file system independent 
utilities), and can be used to implement NFS access to different Hadoop 
compatible file systems.

Right, but is anything going to use ONCRPC besides NFS? Are there any plans to 
create an NFS Gateway that works with other Hadoop FileSystem implementations? 
If the answers to both are no then seems like all of this could reasonably be 
moved into the hadoop-hdfs-nfs project.

 Clean up NFS config settings
 

 Key: HDFS-6056
 URL: https://issues.apache.org/jira/browse/HDFS-6056
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.3.0
Reporter: Aaron T. Myers
Assignee: Brandon Li
 Attachments: HDFS-6056.001.patch, HDFS-6056.002.patch


 As discussed on HDFS-6050, there's a few opportunities to improve the config 
 settings related to NFS. This JIRA is to implement those changes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6317) Add snapshot quota

2014-05-02 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988375#comment-13988375
 ] 

Tsz Wo Nicholas Sze commented on HDFS-6317:
---

If you find snapshot quota useful, I have no problem adding it.  BTW, there is 
already a snapshotQuota field in INodeDirectorySnapshottable.  We may simply 
add a command to set its value.

 Add snapshot quota
 --

 Key: HDFS-6317
 URL: https://issues.apache.org/jira/browse/HDFS-6317
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Alex Shafer

 Either allow the 65k snapshot limit to be set with a configuration option  or 
 add a per-directory snapshot quota settable with the `hdfs dfsadmin` CLI and 
 viewable by appending fields to `hdfs dfs -count -q` output.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6329) WebHdfs does not work if HA is enabled on NN but logical URI is not configured.

2014-05-02 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HDFS-6329:


Assignee: Kihwal Lee

 WebHdfs does not work if HA is enabled on NN but logical URI is not 
 configured.
 ---

 Key: HDFS-6329
 URL: https://issues.apache.org/jira/browse/HDFS-6329
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Blocker

 After HDFS-6100, namenode unconditionally puts the logical name (name service 
 id) as the token service when redirecting webhdfs requests to datanodes, if 
 it detects HA.
 For HA configurations with no client-side failover proxy provider (e.g. IP 
 failover), webhdfs does not work since the clients do not use logical name.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6329) WebHdfs does not work if HA is enabled on NN but logical URI is not configured.

2014-05-02 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-6329:


 Summary: WebHdfs does not work if HA is enabled on NN but logical 
URI is not configured.
 Key: HDFS-6329
 URL: https://issues.apache.org/jira/browse/HDFS-6329
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Priority: Blocker


After HDFS-6100, namenode unconditionally puts the logical name (name service 
id) as the token service when redirecting webhdfs requests to datanodes, if it 
detects HA.

For HA configurations with no client-side failover proxy provider (e.g. IP 
failover), webhdfs does not work since the clients do not use logical name.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6330) Move mkdir() to FSNamesystem

2014-05-02 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-6330:


 Summary: Move mkdir() to FSNamesystem
 Key: HDFS-6330
 URL: https://issues.apache.org/jira/browse/HDFS-6330
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai


Currently mkdir() automatically creates all ancestors for a directory. This is 
implemented in FSDirectory, by calling unprotectedMkdir() along the path. This 
jira proposes to move the function to FSNamesystem to simplify the primitive 
that FSDirectory needs to provide.
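
For context, here is a toy sketch of the create-every-missing-ancestor behavior 
described above; exists() and unprotectedMkdir() are stand-ins rather than the 
real FSDirectory primitives:

{code}
import java.util.ArrayList;
import java.util.List;

class MkdirSketch {
  private final List<String> created = new ArrayList<>();

  // Toy model: walk the path components and create each missing ancestor.
  void mkdirs(String path) {
    StringBuilder prefix = new StringBuilder();
    for (String component : path.split("/")) {
      if (component.isEmpty()) {
        continue;                               // skip the leading "/"
      }
      prefix.append('/').append(component);
      if (!exists(prefix.toString())) {
        unprotectedMkdir(prefix.toString());    // create the missing ancestor
      }
    }
  }

  // Stand-ins for the real namespace primitives mentioned above.
  boolean exists(String p) { return created.contains(p); }
  void unprotectedMkdir(String p) { created.add(p); }
}
{code}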



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6330) Move mkdir() to FSNamesystem

2014-05-02 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6330:
-

Attachment: HDFS-6330.000.patch

 Move mkdir() to FSNamesystem
 

 Key: HDFS-6330
 URL: https://issues.apache.org/jira/browse/HDFS-6330
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-6330.000.patch


 Currently mkdir() automatically creates all ancestors for a directory. This 
 is implemented in FSDirectory, by calling unprotectedMkdir() along the path. 
 This jira proposes to move the function to FSNamesystem to simplify the 
 primitive that FSDirectory needs to provide.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-2856) Fix block protocol so that Datanodes don't require root or jsvc

2014-05-02 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988389#comment-13988389
 ] 

Chris Nauroth commented on HDFS-2856:
-

I think we can achieve compatibility on the 2.x line by having the client 
decide the correct protocol.  The client can make this decision based on 
observing a few things in its runtime environment:
# Datanode address port - We know that existing secured data nodes are on a 
privileged port, and future secured data nodes that don't start as root will be 
on a non-privileged port.
# {{dfs.data.transfer.protection}} - I propose adding this as a new 
configuration property for setting the desired SASL QOP on 
{{DataTransferProtocol}}.  Its values would have the same syntax as the 
existing {{hadoop.rpc.protection}} property.
# {{dfs.encrypt.data.transfer}} - We must maintain the existing behavior for 
deployments that have turned this on.  In addition to using SASL with the 
auth-conf QOP, this property also requires use of an NN-issued encryption key 
and imposes strict enforcement that all connections must be encrypted.  
Effectively, this property must supersede {{dfs.data.transfer.protection}} and 
cause rejection of SASL attempts that use any QOP other than auth-conf.

Using that information, pseudo-code for protocol selection in the client would 
be:
{code}
if security is on
  if datanode port < 1024
if dfs.encrypt.data.transfer is on
  use encrypted SASL handshake (HDFS-3637)
else
  do not use SASL
  else
if dfs.encrypt.data.transfer is on
  use encrypted SASL handshake (HDFS-3637)
else if dfs.data.transfer.protection defined
  use general SASL handshake (HDFS-2856)
else
  error - secured connection on non-privileged port without SASL not 
possible
else
  do not use SASL
{code}
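
To make the branches concrete, here is a rough Java rendering of the same 
selection logic; the enum and method names are invented for illustration and are 
not from the actual patch:

{code}
import java.io.IOException;

class SaslSelectionSketch {
  enum Mode { ENCRYPTED_SASL, GENERAL_SASL, NO_SASL }

  // Mirrors the pseudo-code above; names are illustrative only.
  static Mode select(boolean securityOn, int datanodePort,
      boolean encryptDataTransfer, String dataTransferProtection)
      throws IOException {
    if (!securityOn) {
      return Mode.NO_SASL;
    }
    if (datanodePort < 1024) {                       // privileged port
      return encryptDataTransfer ? Mode.ENCRYPTED_SASL   // HDFS-3637
                                 : Mode.NO_SASL;
    }
    if (encryptDataTransfer) {
      return Mode.ENCRYPTED_SASL;                    // HDFS-3637
    }
    if (dataTransferProtection != null) {
      return Mode.GENERAL_SASL;                      // HDFS-2856
    }
    throw new IOException(
        "Secured connection on a non-privileged port without SASL not possible");
  }
}
{code}

Existing deployments on a privileged port fall into the first branch and keep 
today's behavior, which is what makes the rolling reconfiguration described 
below possible.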

From an upgrade perspective, existing deployments that don't mind sticking 
with a privileged port can just keep running as usual, because the protocol 
would keep working the same way it works today.  For existing deployments that 
want to stop using a privileged port and switch to a non-privileged port, it's 
more complex.  First, they'll need to deploy the code update everywhere.  
Then, they'll need to restart datanodes to pick up 2 configuration changes 
simultaneously: 1) switch the port number and 2) set 
{{dfs.data.transfer.protection}}.  While this is happening, you could have a 
mix of datanodes in the cluster running in different modes: some with a 
privileged port and some with a non-privileged port.  This is OK, because the 
client-side logic above knows how to negotiate the correct protocol on a 
per-DN basis.

One thing that would be impossible under this scheme is using a privileged port 
in combination with the new SASL handshake.  The whole motivation for this 
change is to prevent the need for root access though, so I think this is an 
acceptable limitation.

The most recent version of the design document talks about upgrading the 
{{DATA_TRANSFER_VERSION}}.  I now believe this isn't necessary.  Old clients 
can keep using the existing protocol version.  New clients can trigger the new 
behavior based on {{dfs.data.transfer.protection}}, so a new protocol version 
isn't necessary.  I need to refresh the design doc.

I believe all of the above fits into our compatibility policies.

 Fix block protocol so that Datanodes don't require root or jsvc
 ---

 Key: HDFS-2856
 URL: https://issues.apache.org/jira/browse/HDFS-2856
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, security
Reporter: Owen O'Malley
Assignee: Chris Nauroth
 Attachments: Datanode-Security-Design.pdf, 
 Datanode-Security-Design.pdf, Datanode-Security-Design.pdf, 
 HDFS-2856.prototype.patch


 Since we send the block tokens unencrypted to the datanode, we currently 
 start the datanode as root using jsvc and get a secure (< 1024) port.
 If we have the datanode generate a nonce and send it on the connection and 
 the client sends an hmac of the nonce back instead of the block token it won't 
 reveal any secrets. Thus, we wouldn't require a secure port and would not 
 require root or jsvc.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6293) Issues with OIV processing PB-based fsimages

2014-05-02 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988414#comment-13988414
 ] 

Andrew Wang commented on HDFS-6293:
---

Hey Suresh,

This plan sounds generally good to me, thanks for working this out. I talked to 
our internal users, and had a few questions/comments.

- PB would be preferable to JSON. I'd be interested to hear your reasoning why 
JSON is significantly easier; I figured since we already have PB in the build 
and experience using it, it wouldn't be that much work.
- Can we provide some kind of REST API for fetching this extra listing file? 
This is preferable to manually finding the file and doing scp.
- What kinds of atomicity guarantees are there between the fsimage and this 
listing? We'd like to be able to take the listing and replay the edit log on 
top. Including the txid in the listing is also important for this work.
- Will this also be done by other saveNamespaces besides checkpointing (i.e. 
-saveNamespace as well as at startup)?

I'd also appreciate if you posted any further call-ins to this JIRA, since we'd 
like to be included in the future. Thanks!

 Issues with OIV processing PB-based fsimages
 

 Key: HDFS-6293
 URL: https://issues.apache.org/jira/browse/HDFS-6293
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Priority: Blocker
 Attachments: Heap Histogram.html


 There are issues with OIV when processing fsimages in protobuf. 
 Due to the internal layout changes introduced by the protobuf-based fsimage, 
 OIV consumes excessive amount of memory.  We have tested with a fsimage with 
 about 140M files/directories. The peak heap usage when processing this image 
 in pre-protobuf (i.e. pre-2.4.0) format was about 350MB.  After converting 
 the image to the protobuf format on 2.4.0, OIV would OOM even with 80GB of 
 heap (max new size was 1GB).  It should be possible to process any image with 
 the default heap size of 1.5GB.
 Another issue is the complete change of format/content in OIV's XML output.  
 I also noticed that the secret manager section has no tokens while there were 
 unexpired tokens in the original image (pre-2.4.0).  I did not check whether 
 they were also missing in the new pb fsimage.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6193) HftpFileSystem open should throw FileNotFoundException for non-existing paths

2014-05-02 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988413#comment-13988413
 ] 

Tsuyoshi OZAWA commented on HDFS-6193:
--

Thank you for updating! +1 for the patch (non-binding).
* Compilation works correctly.
* Confirmed that WebHdfsFileSystem.open() and HftpFileSystem.open() throw 
FileNotFoundException when files are missing. Test cases cover it.
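
A minimal sketch of the behavior being verified, using a generic FileSystem 
client rather than the actual test code from the patch; the webhdfs URI below is 
a placeholder:

{code}
import java.io.FileNotFoundException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class OpenMissingPathSketch {
  public static void main(String[] args) throws Exception {
    // "webhdfs://nn/" is a placeholder; substitute a real NameNode address.
    FileSystem fs = FileSystem.get(URI.create("webhdfs://nn/"), new Configuration());
    try {
      fs.open(new Path("/no/such/file"));
      System.out.println("BUG: open() did not fail for a missing path");
    } catch (FileNotFoundException expected) {
      // With the patch, the missing path is detected at open() time (ENOENT-like).
      System.out.println("open() failed fast, as expected");
    }
  }
}
{code}

With the unpatched client, the same open() call appears to succeed and the 
FileNotFoundException surfaces only on the first read, as the issue description 
explains.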

 HftpFileSystem open should throw FileNotFoundException for non-existing paths
 -

 Key: HDFS-6193
 URL: https://issues.apache.org/jira/browse/HDFS-6193
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
 Attachments: HDFS-6193-branch-2.4.0.v01.patch, 
 HDFS-6193-branch-2.4.v02.patch


 WebHdfsFileSystem.open and HftpFileSystem.open incorrectly handle 
 non-existing paths. 
 - 'open', does not really open anything, i.e., it does not contact the 
 server, and therefore cannot discover FileNotFound, it's deferred until next 
 read. It's counterintuitive and not how local FS or HDFS work. In POSIX you 
 get ENOENT on open. 
 [LzoInputFormat.getSplits|https://github.com/kevinweil/elephant-bird/blob/master/core/src/main/java/com/twitter/elephantbird/mapreduce/input/LzoInputFormat.java]
  is an example of the code that's broken because of this.
 - On the server side, FileDataServlet incorrectly sends SC_BAD_REQUEST 
 instead of SC_NOT_FOUND for non-existing paths



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6056) Clean up NFS config settings

2014-05-02 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988417#comment-13988417
 ] 

Brandon Li commented on HDFS-6056:
--

The RPC and XDR code could be extended/refactored for MSRPC and thus CIFS use.
I know of some ideas for NFS access to blob-store-based Hadoop file systems.

 Clean up NFS config settings
 

 Key: HDFS-6056
 URL: https://issues.apache.org/jira/browse/HDFS-6056
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.3.0
Reporter: Aaron T. Myers
Assignee: Brandon Li
 Attachments: HDFS-6056.001.patch, HDFS-6056.002.patch


 As discussed on HDFS-6050, there's a few opportunities to improve the config 
 settings related to NFS. This JIRA is to implement those changes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6056) Clean up NFS config settings

2014-05-02 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988420#comment-13988420
 ] 

Aaron T. Myers commented on HDFS-6056:
--

OK, I'm still a little skeptical that we'll see any actual use of that code 
outside of the HDFS NFS Gateway, and if we do we could always move it back, but 
up to you.

 Clean up NFS config settings
 

 Key: HDFS-6056
 URL: https://issues.apache.org/jira/browse/HDFS-6056
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.3.0
Reporter: Aaron T. Myers
Assignee: Brandon Li
 Attachments: HDFS-6056.001.patch, HDFS-6056.002.patch


 As discussed on HDFS-6050, there's a few opportunities to improve the config 
 settings related to NFS. This JIRA is to implement those changes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6293) Issues with OIV processing PB-based fsimages

2014-05-02 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988435#comment-13988435
 ] 

Suresh Srinivas commented on HDFS-6293:
---

bq. PB would be preferable to JSON. I'd be interested to hear your reasoning 
why JSON is significantly easier; I figured since we already have PB in the 
build and experience using it, it wouldn't be that much work.
A PB implementation for a large number of objects as an array has read-side 
issues and requires designing the protobuf more carefully, which is an 
investment of time. [~wheat9] understands this better and can answer (or you 
can see the structure of the current fsimage proto, where this is considered). 
If you want to pursue that direction you are welcome; you can add it through a 
new configuration option for the output format. Doing it in JSON gives us a 
quick solution and is sufficient for the use cases we are looking for.

bq. Can we provide some kind of REST API for fetching this extra listing file? 
This is preferable to manually finding the file and doing scp.
Good idea. Lets do it in another jira. Please create a related jira.

bq. What kinds of atomicity guarantees are there between the fsimage and this 
listing? We'd like to be able to take the listing and replay the edit log on 
top. Including the txid in the listing is also important for this work.
Not sure what your question means. This report has the same state as the 
fsimage, given that it is generated right after the checkpoint. The printed 
report would include transaction id information.

bq. Will this also be done by other saveNamespaces besides checkpointing (i.e. 
-saveNamespace as well as at startup)?
Not sure if that is necessary. If it is, we can certainly add that in another 
jira. One thing that I should have mentioned is, currently this file exists 
only in standby and will not be shipped to active.

bq. I'd also appreciate if you posted any further call-ins to this JIRA
Certainly, where possible. But you have all the information in the jira and 
have an opportunity to discuss it, right?

 Issues with OIV processing PB-based fsimages
 

 Key: HDFS-6293
 URL: https://issues.apache.org/jira/browse/HDFS-6293
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Priority: Blocker
 Attachments: Heap Histogram.html


 There are issues with OIV when processing fsimages in protobuf. 
 Due to the internal layout changes introduced by the protobuf-based fsimage, 
 OIV consumes excessive amount of memory.  We have tested with a fsimage with 
 about 140M files/directories. The peak heap usage when processing this image 
 in pre-protobuf (i.e. pre-2.4.0) format was about 350MB.  After converting 
 the image to the protobuf format on 2.4.0, OIV would OOM even with 80GB of 
 heap (max new size was 1GB).  It should be possible to process any image with 
 the default heap size of 1.5GB.
 Another issue is the complete change of format/content in OIV's XML output.  
 I also noticed that the secret manager section has no tokens while there were 
 unexpired tokens in the original image (pre-2.4.0).  I did not check whether 
 they were also missing in the new pb fsimage.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6056) Clean up NFS config settings

2014-05-02 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13988442#comment-13988442
 ] 

Brandon Li commented on HDFS-6056:
--

Thanks Aaron. 
I will upload a new patch to consolidate the related configurations. 

 Clean up NFS config settings
 

 Key: HDFS-6056
 URL: https://issues.apache.org/jira/browse/HDFS-6056
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.3.0
Reporter: Aaron T. Myers
Assignee: Brandon Li
 Attachments: HDFS-6056.001.patch, HDFS-6056.002.patch


 As discussed on HDFS-6050, there's a few opportunities to improve the config 
 settings related to NFS. This JIRA is to implement those changes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6331) ClientProtocol#setXattr should not be annotated idempotent

2014-05-02 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-6331:
-

 Summary: ClientProtocol#setXattr should not be annotated idempotent
 Key: HDFS-6331
 URL: https://issues.apache.org/jira/browse/HDFS-6331
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS XAttrs (HDFS-2006)
Reporter: Andrew Wang


ClientProtocol#setXAttr is annotated @Idempotent, but this is incorrect since 
subsequent retries need to throw different exceptions based on the passed flags 
(e.g. CREATE, REPLACE).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

