[jira] [Commented] (HDFS-6741) Improve permission denied message when FSPermissionChecker#checkOwner fails

2014-07-24 Thread Stephen Chu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14072859#comment-14072859
 ] 

Stephen Chu commented on HDFS-6741:
---

No unit tests were added because this is a small change to an exception message.

The failing tests are not related to this change. I locally re-ran TestWebHDFS 
and TestPipelinesFailover multiple times successfully to double-check.

 Improve permission denied message when FSPermissionChecker#checkOwner fails
 ---

 Key: HDFS-6741
 URL: https://issues.apache.org/jira/browse/HDFS-6741
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.5.0
Reporter: Stephen Chu
Assignee: Stephen Chu
Priority: Minor
 Attachments: HDFS-6741.1.patch


 Currently, FSPermissionChecker#checkOwner throws an AccessControlException 
 with a simple "Permission denied" message.
 When users try to set an ACL without ownership permissions, they'll see 
 something like:
 {code}
 [schu@hdfs-vanilla-1 hadoop]$ hdfs dfs -setfacl -m user:schu:--- /tmp
 setfacl: Permission denied
 {code}
 It'd be helpful if the message explained why the permission was 
 denied, to avoid confusion for users who aren't familiar with permissions.
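 A hedged sketch of the kind of message this asks for (illustrative only; 
 HDFS-6741.1.patch is the actual change):
 {code}
 // Illustrative: name the user and the inode that failed the ownership
 // check instead of throwing a bare "Permission denied".
 private void checkOwner(String user, INode inode) throws AccessControlException {
   if (inode != null && user.equals(inode.getUserName())) {
     return;
   }
   throw new AccessControlException(
       "Permission denied. user=" + user + " is not the owner of inode=" + inode);
 }
 {code}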



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6744) Improve decommissioning nodes and dead nodes access on the new NN webUI

2014-07-24 Thread Ming Ma (JIRA)
Ming Ma created HDFS-6744:
-

 Summary: Improve decommissioning nodes and dead nodes access on 
the new NN webUI
 Key: HDFS-6744
 URL: https://issues.apache.org/jira/browse/HDFS-6744
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma


The new NN webUI lists live nodes at the top of the page, followed by dead nodes 
and decommissioning nodes. From the admins' point of view:

1. Decommissioning nodes and dead nodes are more interesting. It would be better to 
move decommissioning nodes to the top of the page, followed by dead nodes and 
then live nodes.
2. To find decommissioning nodes or dead nodes, the whole page, which includes 
all nodes, needs to be loaded. That could take some time for big clusters.

The legacy web UI filters nodes by type dynamically. That seems to work 
well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6657) Remove link to 'Legacy UI' in trunk's Namenode UI

2014-07-24 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-6657:


Attachment: HDFS-6657.patch

Updated the patch addressing the above comments. Please review.

 Remove link to 'Legacy UI' in trunk's Namenode UI
 -

 Key: HDFS-6657
 URL: https://issues.apache.org/jira/browse/HDFS-6657
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor
 Attachments: HDFS-6657.patch, HDFS-6657.patch


 A link to the 'Legacy UI' is provided on the namenode's UI.
 Since all JSP pages are removed in trunk, these links will not work and can be 
 removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6114) Block Scan log rolling will never happen if blocks written continuously leading to huge size of dncp_block_verification.log.curr

2014-07-24 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14072863#comment-14072863
 ] 

Vinayakumar B commented on HDFS-6114:
-

Thanks a lot [~cmccabe] for reviews and commit.

 Block Scan log rolling will never happen if blocks written continuously 
 leading to huge size of dncp_block_verification.log.curr
 

 Key: HDFS-6114
 URL: https://issues.apache.org/jira/browse/HDFS-6114
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.3.0, 2.4.0
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Critical
 Fix For: 2.6.0

 Attachments: HDFS-6114.patch, HDFS-6114.patch, HDFS-6114.patch, 
 HDFS-6114.patch


 1. {{BlockPoolSliceScanner#scan()}} will not return until all the blocks are 
 scanned. 
 2. If blocks (several MBs in size) are written to the datanode continuously, 
 then one iteration of {{BlockPoolSliceScanner#scan()}} will be continuously 
 scanning blocks.
 3. These blocks will be deleted after some time (long enough for them to get 
 scanned).
 4. As block scanning is throttled, verification of all blocks will take a 
 long time.
 5. Rolling will never happen, so even though the total number of blocks in the 
 datanode doesn't increase, entries (including stale entries for deleted 
 blocks) in *dncp_block_verification.log.curr* continuously accumulate, leading 
 to a huge file.
 In one of our environments, it grew to more than 1 TB while the total number of 
 blocks was only ~45k.
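 A minimal sketch of the fix direction (names are hypothetical, not the 
 committed change): roll the verification log on a time basis inside the scan 
 loop, instead of only after a full pass over all blocks completes:
 {code}
 // Hypothetical helper called from the scan loop. lastRollTime and
 // scanPeriod stand in for whatever state the real scanner keeps.
 private void rollVerificationLogsIfDue(long now) {
   if (now - lastRollTime > scanPeriod) {
     rollVerificationLogs();  // illustrative name for the roll operation
     lastRollTime = now;
   }
 }
 {code}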



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6745) Display the list of very-under-replicated blocks as well as the files on NN webUI

2014-07-24 Thread Ming Ma (JIRA)
Ming Ma created HDFS-6745:
-

 Summary: Display the list of very-under-replicated blocks as 
well as the files on NN webUI
 Key: HDFS-6745
 URL: https://issues.apache.org/jira/browse/HDFS-6745
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma


Sometimes admins want to know the list of very-under-replicated blocks before 
major actions such as decommissioning, as these blocks are more likely to turn 
into missing blocks. Very-under-replicated blocks are those blocks with a live 
replica count of 1 and a replication factor of >= 3.
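A tiny illustration of that definition (hypothetical helper, not HDFS code):
{code}
// "Very under-replicated": one live replica left for a block whose file
// requested a replication factor of 3 or more.
static boolean isVeryUnderReplicated(int liveReplicas, int replicationFactor) {
  return liveReplicas == 1 && replicationFactor >= 3;
}
{code}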



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6746) Support datanode list pagination and filtering for big clusters on NN webUI

2014-07-24 Thread Ming Ma (JIRA)
Ming Ma created HDFS-6746:
-

 Summary: Support datanode list pagination and filtering for big 
clusters on NN webUI
 Key: HDFS-6746
 URL: https://issues.apache.org/jira/browse/HDFS-6746
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma


This isn't a major issue yet. Still, it might be good to add support for 
pagination at some point, and maybe some filtering. For example, filtering is 
useful to show only live nodes that belong to the same rack.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6747) Display the most recent GC info on NN webUI

2014-07-24 Thread Ming Ma (JIRA)
Ming Ma created HDFS-6747:
-

 Summary: Display the most recent GC info on NN webUI
 Key: HDFS-6747
 URL: https://issues.apache.org/jira/browse/HDFS-6747
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma


It will be handy if recent GC information is available on the NN webUI, so 
admins don't need to dig out GC logs.
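For reference, the JVM already exposes per-collector GC data in-process through 
standard JMX beans, which a webUI could surface; a minimal sketch:
{code}
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcInfo {
  public static void main(String[] args) {
    for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
      // Cumulative collection count and time per collector since JVM start;
      // "most recent GC" details would need GC logs or the com.sun.management extension.
      System.out.println(gc.getName() + ": count=" + gc.getCollectionCount()
          + " timeMs=" + gc.getCollectionTime());
    }
  }
}
{code}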



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6743) Put IP address into a new column on the new NN webUI

2014-07-24 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-6743:
--

Description: The new NN webUI combines hostname and IP into one column in the 
datanode list. It is more convenient for admins if the IP address can be put in 
a separate column, as in the legacy NN webUI.  (was: new NN webUI combines 
hostname and IP into one column in datanode list. It is more convenient for 
admins if the IP address can be put to a separate column, as in the legacy NN 
webUI.)

 Put IP address into a new column on the new NN webUI
 

 Key: HDFS-6743
 URL: https://issues.apache.org/jira/browse/HDFS-6743
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma

 The new NN webUI combines hostname and IP into one column in the datanode list. 
 It is more convenient for admins if the IP address can be put in a separate 
 column, as in the legacy NN webUI.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6147) New blocks scanning will be delayed due to issue in BlockPoolSliceScanner#updateBytesToScan(..)

2014-07-24 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-6147:


Attachment: HDFS-6147.patch

Rebased and updated the test.
Please review.

 New blocks scanning will be delayed due to issue in 
 BlockPoolSliceScanner#updateBytesToScan(..)
 ---

 Key: HDFS-6147
 URL: https://issues.apache.org/jira/browse/HDFS-6147
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-6147.patch, HDFS-6147.patch, HDFS-6147.patch, 
 HDFS-6147.patch, HDFS-6147.patch


 Scanning of new blocks will be delayed if old blocks are deleted after a 
 datanode restart.
 Steps:
 1. Write some blocks and wait till all scans are over.
 2. Restart the datanode.
 3. Delete some of the blocks.
 4. Write new blocks which are smaller in size than the deleted blocks.
 Problem:
 {{BlockPoolSliceScanner#updateBytesToScan(..)}} updates {{bytesLeft}} based 
 on the following comparison:
 {code}
 if (lastScanTime < currentPeriodStart) {
   bytesLeft += len;
 }
 {code}
 But in {{BlockPoolSliceScanner#assignInitialVerificationTimes()}}, 
 {{bytesLeft}} is decremented using the comparison below:
 {code}if (now - entry.verificationTime < scanPeriod) {{code}
 Hence, when the old blocks are deleted, {{bytesLeft}} goes negative, and 
 new blocks will not be scanned until it becomes positive again.
 So in both places the verification time should be compared against the scan 
 period.
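 A sketch of the suggested consistent form (illustrative only; the attached 
 patches are authoritative). Here {{now}}, {{scanPeriod}}, {{bytesLeft}}, 
 {{len}} and {{lastScanTime}} are the fields/arguments already used by 
 {{BlockPoolSliceScanner}}:
 {code}
 // Compare against scanPeriod on the add path too, matching
 // assignInitialVerificationTimes(), so bytesLeft cannot drift negative
 // when old blocks are deleted.
 if (now - lastScanTime >= scanPeriod) {
   bytesLeft += len;  // block still needs verification in this period
 }
 {code}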



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6748) Add layout version, NamespaceID, numItemsInTree in XML offlineImageViewer

2014-07-24 Thread Guo Ruijing (JIRA)
Guo Ruijing created HDFS-6748:
-

 Summary: Add layout version, NamespaceID, numItemsInTree in XML 
offlineImageViewer
 Key: HDFS-6748
 URL: https://issues.apache.org/jira/browse/HDFS-6748
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 2.0.5-alpha
Reporter: Guo Ruijing


Add layout version, NamespaceID, numItemsInTree in XML offlineImageViewer:

existing implementation:
a.  hdfs oiv -p XML -i fsimage_929 -o fsimage.xml
b. cat fsimage.xml

<?xml version="1.0"?>
<fsimage><NameSection>
<genstampV1>1000</genstampV1><genstampV2>1098</genstampV2><genstampV1Limit>0</genstampV1Limit><lastAllocatedBlockId>1073741922</lastAllocatedBlockId><txid>929</txid></NameSection>
<INodeSection><lastInodeId>16594</lastInodeId><inode><id>16385</id><type>DIRECTORY</type><name></name><mtime>1406180633657</mtime><permission>hdfs:supergroup:rwxr-xr-x</permission><nsquota>9223372036854775807</nsquota><dsquota>-1</dsquota></inode>

expected behavior:
layout version, NamespaceID, numItemsInTree are included in fsimage.xml



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6722) Display readable last contact time for dead nodes on NN webUI

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14072943#comment-14072943
 ] 

Hadoop QA commented on HDFS-6722:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657531/HDFS-6722-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS
  
org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
  org.apache.hadoop.hdfs.server.balancer.TestBalancer

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7452//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7452//console

This message is automatically generated.

 Display readable last contact time for dead nodes on NN webUI
 -

 Key: HDFS-6722
 URL: https://issues.apache.org/jira/browse/HDFS-6722
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
Assignee: Ming Ma
 Attachments: HDFS-6722-2.patch, HDFS-6722.patch


 For dead node info on the NN webUI, admins want to know when the nodes became 
 dead, to troubleshoot missing blocks, etc. Currently the webUI displays the 
 last contact as the number of seconds since the last contact. It would be 
 useful to display the info in date format.
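 One way to render that (illustrative; the attached patches are authoritative): 
 convert the seconds-since-contact counter into an absolute timestamp before 
 display:
 {code}
 // lastContact is assumed to be "seconds since last heartbeat" as reported
 // by the NameNode.
 long lastContactMillis = System.currentTimeMillis() - lastContact * 1000L;
 String readable = new java.util.Date(lastContactMillis).toString();
 {code}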



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6723) New NN webUI no longer displays decommissioned state for dead node

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14072975#comment-14072975
 ] 

Hadoop QA commented on HDFS-6723:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657535/HDFS-6723.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM
  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7453//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7453//console

This message is automatically generated.

 New NN webUI no longer displays decommissioned state for dead node
 --

 Key: HDFS-6723
 URL: https://issues.apache.org/jira/browse/HDFS-6723
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
 Attachments: HDFS-6723.patch


 Somehow the new webUI doesn't show if a given dead node is decommissioned or 
 not. JMX does return the correct info. Perhaps some bug in dfshealth.html?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6657) Remove link to 'Legacy UI' in trunk's Namenode UI

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073012#comment-14073012
 ] 

Hadoop QA commented on HDFS-6657:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657542/HDFS-6657.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
  
org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7454//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7454//console

This message is automatically generated.

 Remove link to 'Legacy UI' in trunk's Namenode UI
 -

 Key: HDFS-6657
 URL: https://issues.apache.org/jira/browse/HDFS-6657
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor
 Attachments: HDFS-6657.patch, HDFS-6657.patch


 A link to the 'Legacy UI' is provided on the namenode's UI.
 Since all JSP pages are removed in trunk, these links will not work and can be 
 removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6657) Remove link to 'Legacy UI' in trunk's Namenode UI

2014-07-24 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073016#comment-14073016
 ] 

Vinayakumar B commented on HDFS-6657:
-

Failures are not related to this patch.

 Remove link to 'Legacy UI' in trunk's Namenode UI
 -

 Key: HDFS-6657
 URL: https://issues.apache.org/jira/browse/HDFS-6657
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor
 Attachments: HDFS-6657.patch, HDFS-6657.patch


 A link to the 'Legacy UI' is provided on the namenode's UI.
 Since all JSP pages are removed in trunk, these links will not work and can be 
 removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6147) New blocks scanning will be delayed due to issue in BlockPoolSliceScanner#updateBytesToScan(..)

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073038#comment-14073038
 ] 

Hadoop QA commented on HDFS-6147:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657553/HDFS-6147.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
  
org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7455//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7455//console

This message is automatically generated.

 New blocks scanning will be delayed due to issue in 
 BlockPoolSliceScanner#updateBytesToScan(..)
 ---

 Key: HDFS-6147
 URL: https://issues.apache.org/jira/browse/HDFS-6147
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-6147.patch, HDFS-6147.patch, HDFS-6147.patch, 
 HDFS-6147.patch, HDFS-6147.patch


 Scanning of new blocks will be delayed if old blocks are deleted after a 
 datanode restart.
 Steps:
 1. Write some blocks and wait till all scans are over.
 2. Restart the datanode.
 3. Delete some of the blocks.
 4. Write new blocks which are smaller in size than the deleted blocks.
 Problem:
 {{BlockPoolSliceScanner#updateBytesToScan(..)}} updates {{bytesLeft}} based 
 on the following comparison:
 {code}
 if (lastScanTime < currentPeriodStart) {
   bytesLeft += len;
 }
 {code}
 But in {{BlockPoolSliceScanner#assignInitialVerificationTimes()}}, 
 {{bytesLeft}} is decremented using the comparison below:
 {code}if (now - entry.verificationTime < scanPeriod) {{code}
 Hence, when the old blocks are deleted, {{bytesLeft}} goes negative, and 
 new blocks will not be scanned until it becomes positive again.
 So in both places the verification time should be compared against the scan 
 period.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6422) getfattr in CLI doesn't throw exception or return non-0 return code when xattr doesn't exist

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073093#comment-14073093
 ] 

Hudson commented on HDFS-6422:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #622 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/622/])
HDFS-6422. getfattr in CLI doesn't throw exception or return non-0 return code 
when xattr doesn't exist. (Charles Lamb via umamahesh) (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612922)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrNameParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSXAttrBaseTest.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml


 getfattr in CLI doesn't throw exception or return non-0 return code when 
 xattr doesn't exist
 

 Key: HDFS-6422
 URL: https://issues.apache.org/jira/browse/HDFS-6422
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.5.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Blocker
 Fix For: 3.0.0, 2.5.0

 Attachments: HDFS-6422.005.patch, HDFS-6422.006.patch, 
 HDFS-6422.007.patch, HDFS-6422.008.patch, HDFS-6422.009.patch, 
 HDFS-6422.010.patch, HDFS-6422.1.patch, HDFS-6422.2.patch, HDFS-6422.3.patch, 
 HDFS-6474.4.patch, editsStored


 If you do
 hdfs dfs -getfattr -n user.blah /foo
 and user.blah doesn't exist, the command prints
 # file: /foo
 and returns a 0 return code.
 It should print an exception and return a non-0 return code instead.
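 A sketch of the desired behavior (hypothetical check and message; the 
 committed patch defines the real ones). Here {{fs}} is assumed to be an 
 org.apache.hadoop.fs.FileSystem:
 {code}
 // FileSystem#getXAttr returns the value of one extended attribute; failing
 // when it is absent gives the shell a non-0 return code.
 byte[] value = fs.getXAttr(new Path("/foo"), "user.blah");
 if (value == null) {
   throw new IOException("getfattr: attribute user.blah not found");  // illustrative message
 }
 {code}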



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6455) NFS: Exception should be added in NFS log for invalid separator in nfs.exports.allowed.hosts

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073097#comment-14073097
 ] 

Hudson commented on HDFS-6455:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #622 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/622/])
HDFS-6455. NFS: Exception should be added in NFS log for invalid separator in 
nfs.exports.allowed.hosts. Contributed by Abhiraj Butala (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612947)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 NFS: Exception should be added in NFS log for invalid separator in 
 nfs.exports.allowed.hosts
 

 Key: HDFS-6455
 URL: https://issues.apache.org/jira/browse/HDFS-6455
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Yesha Vora
Assignee: Abhiraj Butala
 Fix For: 2.6.0

 Attachments: HDFS-6455.002.patch, HDFS-6455.patch


 The error for an invalid separator in the dfs.nfs.exports.allowed.hosts property 
 should be logged in the NFS log file instead of the nfs.out file.
 Steps to reproduce:
 1. Pass an invalid separator in dfs.nfs.exports.allowed.hosts
 {noformat}
 <property><name>dfs.nfs.exports.allowed.hosts</name><value>host1  ro:host2 
 rw</value></property>
 {noformat}
 2. Restart the NFS server. The NFS server fails to start and prints the 
 exception to the console.
 {noformat}
 [hrt_qa@host1 hwqe]$ ssh -o StrictHostKeyChecking=no -o 
 UserKnownHostsFile=/dev/null host1 sudo su - -c 
 \"/usr/lib/hadoop/sbin/hadoop-daemon.sh start nfs3\" hdfs
 starting nfs3, logging to /tmp/log/hadoop/hdfs/hadoop-hdfs-nfs3-horst1.out
 DEPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
 formatted line 'host1 ro:host2 rw'
   at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
   at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:151)
   at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
   at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
   at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:43)
   at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
 {noformat}
 The NFS log does not print any error message; it directly shuts down. 
 {noformat}
 STARTUP_MSG:   java = 1.6.0_31
 /
 2014-05-27 18:47:13,972 INFO  nfs3.Nfs3Base (SignalLogger.java:register(91)) 
 - registered UNIX signal handlers for [TERM, HUP, INT]
 2014-05-27 18:47:14,169 INFO  nfs3.IdUserGroup 
 (IdUserGroup.java:updateMapInternal(159)) - Updated user map size:259
 2014-05-27 18:47:14,179 INFO  nfs3.IdUserGroup 
 (IdUserGroup.java:updateMapInternal(159)) - Updated group map size:73
 2014-05-27 18:47:14,192 INFO  nfs3.Nfs3Base (StringUtils.java:run(640)) - 
 SHUTDOWN_MSG:
 /
 SHUTDOWN_MSG: Shutting down Nfs3 at 
 {noformat}
 The nfs.out file contains the exception.
 {noformat}
 EPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
 formatted line 'host1 ro:host2 rw'
 at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
 at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:151)
 at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:43)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
 ulimit -a for user hdfs
 core file size  (blocks, -c) 409600
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 0
 file size   (blocks, -f) unlimited
 pending signals (-i) 188893
 max locked memory   (kbytes, -l) unlimited
 max memory size (kbytes, -m) unlimited
 open files  (-n) 32768
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 10240
 cpu time   (seconds, -t) unlimited
 max user processes  
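 A sketch of the fix direction described here (illustrative; 
 HDFS-6455.002.patch is authoritative): catch the parse failure at startup, 
 log it to the NFS log, then rethrow so startup still fails. Here {{exports}}, 
 {{config}} and {{LOG}} stand in for the enclosing component's state:
 {code}
 try {
   exports = NfsExports.getInstance(config);
 } catch (IllegalArgumentException e) {
   LOG.error("Invalid NFS Exports provided: ", e);  // now visible in the NFS log
   throw e;
 }
 {code}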

[jira] [Commented] (HDFS-6114) Block Scan log rolling will never happen if blocks written continuously leading to huge size of dncp_block_verification.log.curr

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073099#comment-14073099
 ] 

Hudson commented on HDFS-6114:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #622 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/622/])
HDFS-6114. Block Scan log rolling will never happen if blocks written 
continuously leading to huge size of dncp_block_verification.log.curr 
(vinayakumarb via cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612943)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java


 Block Scan log rolling will never happen if blocks written continuously 
 leading to huge size of dncp_block_verification.log.curr
 

 Key: HDFS-6114
 URL: https://issues.apache.org/jira/browse/HDFS-6114
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.3.0, 2.4.0
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Critical
 Fix For: 2.6.0

 Attachments: HDFS-6114.patch, HDFS-6114.patch, HDFS-6114.patch, 
 HDFS-6114.patch


 1. {{BlockPoolSliceScanner#scan()}} will not return until all the blocks are 
 scanned. 
 2. If blocks (several MBs in size) are written to the datanode continuously, 
 then one iteration of {{BlockPoolSliceScanner#scan()}} will be continuously 
 scanning blocks.
 3. These blocks will be deleted after some time (long enough for them to get 
 scanned).
 4. As block scanning is throttled, verification of all blocks will take a 
 long time.
 5. Rolling will never happen, so even though the total number of blocks in the 
 datanode doesn't increase, entries (including stale entries for deleted 
 blocks) in *dncp_block_verification.log.curr* continuously accumulate, leading 
 to a huge file.
 In one of our environments, it grew to more than 1 TB while the total number of 
 blocks was only ~45k.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6114) Block Scan log rolling will never happen if blocks written continuously leading to huge size of dncp_block_verification.log.curr

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073210#comment-14073210
 ] 

Hudson commented on HDFS-6114:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1814 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1814/])
HDFS-6114. Block Scan log rolling will never happen if blocks written 
continuously leading to huge size of dncp_block_verification.log.curr 
(vinayakumarb via cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612943)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java


 Block Scan log rolling will never happen if blocks written continuously 
 leading to huge size of dncp_block_verification.log.curr
 

 Key: HDFS-6114
 URL: https://issues.apache.org/jira/browse/HDFS-6114
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.3.0, 2.4.0
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Critical
 Fix For: 2.6.0

 Attachments: HDFS-6114.patch, HDFS-6114.patch, HDFS-6114.patch, 
 HDFS-6114.patch


 1. {{BlockPoolSliceScanner#scan()}} will not return until all the blocks are 
 scanned. 
 2. If blocks (several MBs in size) are written to the datanode continuously, 
 then one iteration of {{BlockPoolSliceScanner#scan()}} will be continuously 
 scanning blocks.
 3. These blocks will be deleted after some time (long enough for them to get 
 scanned).
 4. As block scanning is throttled, verification of all blocks will take a 
 long time.
 5. Rolling will never happen, so even though the total number of blocks in the 
 datanode doesn't increase, entries (including stale entries for deleted 
 blocks) in *dncp_block_verification.log.curr* continuously accumulate, leading 
 to a huge file.
 In one of our environments, it grew to more than 1 TB while the total number of 
 blocks was only ~45k.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6455) NFS: Exception should be added in NFS log for invalid separator in nfs.exports.allowed.hosts

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073208#comment-14073208
 ] 

Hudson commented on HDFS-6455:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1814 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1814/])
HDFS-6455. NFS: Exception should be added in NFS log for invalid separator in 
nfs.exports.allowed.hosts. Contributed by Abhiraj Butala (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612947)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 NFS: Exception should be added in NFS log for invalid separator in 
 nfs.exports.allowed.hosts
 

 Key: HDFS-6455
 URL: https://issues.apache.org/jira/browse/HDFS-6455
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Yesha Vora
Assignee: Abhiraj Butala
 Fix For: 2.6.0

 Attachments: HDFS-6455.002.patch, HDFS-6455.patch


 The error for an invalid separator in the dfs.nfs.exports.allowed.hosts property 
 should be logged in the NFS log file instead of the nfs.out file.
 Steps to reproduce:
 1. Pass an invalid separator in dfs.nfs.exports.allowed.hosts
 {noformat}
 <property><name>dfs.nfs.exports.allowed.hosts</name><value>host1  ro:host2 
 rw</value></property>
 {noformat}
 2. Restart the NFS server. The NFS server fails to start and prints the 
 exception to the console.
 {noformat}
 [hrt_qa@host1 hwqe]$ ssh -o StrictHostKeyChecking=no -o 
 UserKnownHostsFile=/dev/null host1 sudo su - -c 
 \"/usr/lib/hadoop/sbin/hadoop-daemon.sh start nfs3\" hdfs
 starting nfs3, logging to /tmp/log/hadoop/hdfs/hadoop-hdfs-nfs3-horst1.out
 DEPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
 formatted line 'host1 ro:host2 rw'
   at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
   at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:151)
   at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
   at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
   at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:43)
   at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
 {noformat}
 The NFS log does not print any error message; it directly shuts down. 
 {noformat}
 STARTUP_MSG:   java = 1.6.0_31
 /
 2014-05-27 18:47:13,972 INFO  nfs3.Nfs3Base (SignalLogger.java:register(91)) 
 - registered UNIX signal handlers for [TERM, HUP, INT]
 2014-05-27 18:47:14,169 INFO  nfs3.IdUserGroup 
 (IdUserGroup.java:updateMapInternal(159)) - Updated user map size:259
 2014-05-27 18:47:14,179 INFO  nfs3.IdUserGroup 
 (IdUserGroup.java:updateMapInternal(159)) - Updated group map size:73
 2014-05-27 18:47:14,192 INFO  nfs3.Nfs3Base (StringUtils.java:run(640)) - 
 SHUTDOWN_MSG:
 /
 SHUTDOWN_MSG: Shutting down Nfs3 at 
 {noformat}
 The nfs.out file contains the exception.
 {noformat}
 EPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
 formatted line 'host1 ro:host2 rw'
 at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
 at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:151)
 at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:43)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
 ulimit -a for user hdfs
 core file size  (blocks, -c) 409600
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 0
 file size   (blocks, -f) unlimited
 pending signals (-i) 188893
 max locked memory   (kbytes, -l) unlimited
 max memory size (kbytes, -m) unlimited
 open files  (-n) 32768
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 10240
 cpu time   (seconds, -t) unlimited
 max user processes  

[jira] [Commented] (HDFS-6422) getfattr in CLI doesn't throw exception or return non-0 return code when xattr doesn't exist

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073204#comment-14073204
 ] 

Hudson commented on HDFS-6422:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1814 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1814/])
HDFS-6422. getfattr in CLI doesn't throw exception or return non-0 return code 
when xattr doesn't exist. (Charles Lamb via umamahesh) (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612922)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrNameParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSXAttrBaseTest.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml


 getfattr in CLI doesn't throw exception or return non-0 return code when 
 xattr doesn't exist
 

 Key: HDFS-6422
 URL: https://issues.apache.org/jira/browse/HDFS-6422
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.5.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Blocker
 Fix For: 3.0.0, 2.5.0

 Attachments: HDFS-6422.005.patch, HDFS-6422.006.patch, 
 HDFS-6422.007.patch, HDFS-6422.008.patch, HDFS-6422.009.patch, 
 HDFS-6422.010.patch, HDFS-6422.1.patch, HDFS-6422.2.patch, HDFS-6422.3.patch, 
 HDFS-6474.4.patch, editsStored


 If you do
 hdfs dfs -getfattr -n user.blah /foo
 and user.blah doesn't exist, the command prints
 # file: /foo
 and returns a 0 return code.
 It should print an exception and return a non-0 return code instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6747) Display the most recent GC info on NN webUI

2014-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073215#comment-14073215
 ] 

Allen Wittenauer commented on HDFS-6747:


Admins don't look at web UIs.

 Display the most recent GC info on NN webUI
 ---

 Key: HDFS-6747
 URL: https://issues.apache.org/jira/browse/HDFS-6747
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma

 It will be handy if recent GC information is available on the NN webUI, so 
 admins don't need to dig out GC logs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6745) Display the list of very-under-replicated blocks as well as the files on NN webUI

2014-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073216#comment-14073216
 ] 

Allen Wittenauer commented on HDFS-6745:


This would be better as a CLI command and/or available via the metrics system.

 Display the list of very-under-replicated blocks as well as the files on NN 
 webUI
 ---

 Key: HDFS-6745
 URL: https://issues.apache.org/jira/browse/HDFS-6745
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma

 Sometimes admins want to know the list of very-under-replicated blocks 
 before major actions such as decommissioning, as these blocks are more likely to 
 turn into missing blocks. Very-under-replicated blocks are those blocks 
 with a live replica count of 1 and a replication factor of >= 3.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6743) Put IP address into a new column on the new NN webUI

2014-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073223#comment-14073223
 ] 

Allen Wittenauer commented on HDFS-6743:


Do we even need to list the IP address?  What if we just drop it?  

 Put IP address into a new column on the new NN webUI
 

 Key: HDFS-6743
 URL: https://issues.apache.org/jira/browse/HDFS-6743
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma

 The new NN webUI combines hostname and IP into one column in the datanode list. 
 It is more convenient for admins if the IP address can be put in a separate 
 column, as in the legacy NN webUI.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HDFS-6743) Put IP address into a new column on the new NN webUI

2014-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073223#comment-14073223
 ] 

Allen Wittenauer edited comment on HDFS-6743 at 7/24/14 2:11 PM:
-

Do we even need to list the IP address?  What if we just drop it?  Or maybe 
make it a tooltip?


was (Author: aw):
Do we even need to list the IP address?  What if we just drop it?  

 Put IP address into a new column on the new NN webUI
 

 Key: HDFS-6743
 URL: https://issues.apache.org/jira/browse/HDFS-6743
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma

 The new NN webUI combines hostname and IP into one column in the datanode list. 
 It is more convenient for admins if the IP address can be put in a separate 
 column, as in the legacy NN webUI.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6422) getfattr in CLI doesn't throw exception or return non-0 return code when xattr doesn't exist

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073262#comment-14073262
 ] 

Hudson commented on HDFS-6422:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1841 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1841/])
HDFS-6422. getfattr in CLI doesn't throw exception or return non-0 return code 
when xattr doesn't exist. (Charles Lamb via umamahesh) (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612922)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/XAttrNameParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSXAttrBaseTest.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/resources/TestParam.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/editsStored.xml


 getfattr in CLI doesn't throw exception or return non-0 return code when 
 xattr doesn't exist
 

 Key: HDFS-6422
 URL: https://issues.apache.org/jira/browse/HDFS-6422
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.5.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Blocker
 Fix For: 3.0.0, 2.5.0

 Attachments: HDFS-6422.005.patch, HDFS-6422.006.patch, 
 HDFS-6422.007.patch, HDFS-6422.008.patch, HDFS-6422.009.patch, 
 HDFS-6422.010.patch, HDFS-6422.1.patch, HDFS-6422.2.patch, HDFS-6422.3.patch, 
 HDFS-6474.4.patch, editsStored


 If you do
 hdfs dfs -getfattr -n user.blah /foo
 and user.blah doesn't exist, the command prints
 # file: /foo
 and returns a 0 return code.
 It should print an exception and return a non-0 return code instead.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6114) Block Scan log rolling will never happen if blocks written continuously leading to huge size of dncp_block_verification.log.curr

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073268#comment-14073268
 ] 

Hudson commented on HDFS-6114:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1841 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1841/])
HDFS-6114. Block Scan log rolling will never happen if blocks written 
continuously leading to huge size of dncp_block_verification.log.curr 
(vinayakumarb via cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612943)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceScanner.java


 Block Scan log rolling will never happen if blocks written continuously 
 leading to huge size of dncp_block_verification.log.curr
 

 Key: HDFS-6114
 URL: https://issues.apache.org/jira/browse/HDFS-6114
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.3.0, 2.4.0
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Critical
 Fix For: 2.6.0

 Attachments: HDFS-6114.patch, HDFS-6114.patch, HDFS-6114.patch, 
 HDFS-6114.patch


 1. {{BlockPoolSliceScanner#scan()}} will not return until all the blocks are 
 scanned. 
 2. If blocks (several MBs in size) are written to the datanode continuously, 
 then one iteration of {{BlockPoolSliceScanner#scan()}} will be continuously 
 scanning blocks.
 3. These blocks will be deleted after some time (long enough for them to get 
 scanned).
 4. As block scanning is throttled, verification of all blocks will take a 
 long time.
 5. Rolling will never happen, so even though the total number of blocks in the 
 datanode doesn't increase, entries (including stale entries for deleted 
 blocks) in *dncp_block_verification.log.curr* continuously accumulate, leading 
 to a huge file.
 In one of our environments, it grew to more than 1 TB while the total number of 
 blocks was only ~45k.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6455) NFS: Exception should be added in NFS log for invalid separator in nfs.exports.allowed.hosts

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073266#comment-14073266
 ] 

Hudson commented on HDFS-6455:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1841 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1841/])
HDFS-6455. NFS: Exception should be added in NFS log for invalid separator in 
nfs.exports.allowed.hosts. Contributed by Abhiraj Butala (brandonli: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612947)
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/mount/RpcProgramMountd.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 NFS: Exception should be added in NFS log for invalid separator in 
 nfs.exports.allowed.hosts
 

 Key: HDFS-6455
 URL: https://issues.apache.org/jira/browse/HDFS-6455
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Yesha Vora
Assignee: Abhiraj Butala
 Fix For: 2.6.0

 Attachments: HDFS-6455.002.patch, HDFS-6455.patch


 The error for an invalid separator in the dfs.nfs.exports.allowed.hosts property 
 should be logged in the NFS log file instead of the nfs.out file.
 Steps to reproduce:
 1. Pass an invalid separator in dfs.nfs.exports.allowed.hosts
 {noformat}
 <property><name>dfs.nfs.exports.allowed.hosts</name><value>host1  ro:host2 
 rw</value></property>
 {noformat}
 2. Restart the NFS server. The NFS server fails to start and prints the 
 exception to the console.
 {noformat}
 [hrt_qa@host1 hwqe]$ ssh -o StrictHostKeyChecking=no -o 
 UserKnownHostsFile=/dev/null host1 sudo su - -c 
 \"/usr/lib/hadoop/sbin/hadoop-daemon.sh start nfs3\" hdfs
 starting nfs3, logging to /tmp/log/hadoop/hdfs/hadoop-hdfs-nfs3-horst1.out
 DEPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
 formatted line 'host1 ro:host2 rw'
   at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
   at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:151)
   at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
   at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
   at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:43)
   at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
 {noformat}
 The NFS log does not print any error message; it directly shuts down. 
 {noformat}
 STARTUP_MSG:   java = 1.6.0_31
 /
 2014-05-27 18:47:13,972 INFO  nfs3.Nfs3Base (SignalLogger.java:register(91)) 
 - registered UNIX signal handlers for [TERM, HUP, INT]
 2014-05-27 18:47:14,169 INFO  nfs3.IdUserGroup 
 (IdUserGroup.java:updateMapInternal(159)) - Updated user map size:259
 2014-05-27 18:47:14,179 INFO  nfs3.IdUserGroup 
 (IdUserGroup.java:updateMapInternal(159)) - Updated group map size:73
 2014-05-27 18:47:14,192 INFO  nfs3.Nfs3Base (StringUtils.java:run(640)) - 
 SHUTDOWN_MSG:
 /
 SHUTDOWN_MSG: Shutting down Nfs3 at 
 {noformat}
 The nfs.out file contains the exception.
 {noformat}
 EPRECATED: Use of this script to execute hdfs command is deprecated.
 Instead use the hdfs command for it.
 Exception in thread "main" java.lang.IllegalArgumentException: Incorrectly 
 formatted line 'host1 ro:host2 rw'
 at org.apache.hadoop.nfs.NfsExports.getMatch(NfsExports.java:356)
 at org.apache.hadoop.nfs.NfsExports.<init>(NfsExports.java:151)
 at org.apache.hadoop.nfs.NfsExports.getInstance(NfsExports.java:54)
 at 
 org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.<init>(RpcProgramNfs3.java:176)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.<init>(Nfs3.java:43)
 at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.main(Nfs3.java:59)
 ulimit -a for user hdfs
 core file size  (blocks, -c) 409600
 data seg size   (kbytes, -d) unlimited
 scheduling priority (-e) 0
 file size   (blocks, -f) unlimited
 pending signals (-i) 188893
 max locked memory   (kbytes, -l) unlimited
 max memory size (kbytes, -m) unlimited
 open files  (-n) 32768
 pipe size(512 bytes, -p) 8
 POSIX message queues (bytes, -q) 819200
 real-time priority  (-r) 0
 stack size  (kbytes, -s) 10240
 cpu time   (seconds, -t) unlimited
 max user processes  

[jira] [Commented] (HDFS-6747) Display the most recent GC info on NN webUI

2014-07-24 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073390#comment-14073390
 ] 

Ming Ma commented on HDFS-6747:
---

Thanks, Allen. This and several other jiras actually come from requests from our 
admins. In the cases I have dealt with, people use the webUI when there is an 
issue with the cluster, such as missing blocks, or when they want to perform 
certain tasks such as decommissioning.

 Display the most recent GC info on NN webUI
 ---

 Key: HDFS-6747
 URL: https://issues.apache.org/jira/browse/HDFS-6747
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma

 It will be handy if the recent GC information is available on the NN webUI, so 
 admins don't need to dig out GC logs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6747) Display the most recent GC info on NN webUI

2014-07-24 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073399#comment-14073399
 ] 

Esteban Gutierrez commented on HDFS-6747:
-

HDFS-6403 exposes the JvmPauseMonitor metric, does it work for you [~mingma]?


 Display the most recent GC info on NN webUI
 ---

 Key: HDFS-6747
 URL: https://issues.apache.org/jira/browse/HDFS-6747
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma

 It will be handy if the recent GC information is available on the NN webUI, so 
 admins don't need to dig out GC logs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6657) Remove link to 'Legacy UI' in trunk's Namenode UI

2014-07-24 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073406#comment-14073406
 ] 

Haohui Mai commented on HDFS-6657:
--

Looks good to me. +1.

 Remove link to 'Legacy UI' in trunk's Namenode UI
 -

 Key: HDFS-6657
 URL: https://issues.apache.org/jira/browse/HDFS-6657
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor
 Attachments: HDFS-6657.patch, HDFS-6657.patch


 A link to the 'Legacy UI' is provided on the namenode's UI.
 Since all jsp pages have been removed in trunk, these links will not work and 
 can be removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6709) Implement off-heap data structures for NameNode and other HDFS memory optimization

2014-07-24 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073404#comment-14073404
 ] 

Daryn Sharp commented on HDFS-6709:
---

Questions/comments on the advantages:
* I thought RTTI is per class, not instance?  If yes, the savings are 
immaterial?
* Using misaligned access may result in processor incompatibility, impact 
performance, and introduce atomicity and CAS problems; concurrent access to 
adjacent misaligned memory in the same cache line may be completely unsafe.
* No references, only primitives can be stored off-heap, so how do value types 
(non-boxed primitives, correct?) apply?  Wouldn't the instance managing the 
slab have methods that return the correct primitive?

I think off-heap may be a win in some limited cases, but I'm struggling with 
how it will work in practice.  Here's thoughts for clarification on actual 
application of the technique:
# OO encapsulation and polymorphism are lost?
# We can't store references anymore so we're reduced to primitives?
# Let's say we used to have a class {{Foo}} with instance fields 
{{field1..field4}} of various types.  {{FooManager.get(id)}} returns a {{Foo}} 
instance.  But now an off-heap structure doesn't have any instantiated {{Foo}} 
entries, or else there is no GC benefit other than smaller instances to compact.
# Does {{FooManager}} instantiate new {{Foo}} instances every time 
{{FooManager.get(id)}} is called?  If yes, it generates a tremendous amount of 
garbage that defeats the GC benefit of going off heap.
# Does {{FooManager}} try to maintain a limited pool of mutable {{Foo}} objects 
for reuse (e.g. via a {{Foo#reinitialize(id, f1..f4)}})?  (I've tried this 
technique elsewhere with degraded performance, but maybe there's a good way to 
do it.)
# If no {{Foo}} entries are allowed:
## does {{FooManager}} have methods for every data member that used to be 
encapsulated by {{Foo}}, i.e. {{FooManager.getField$N(id)}}?  We'll have to 
make N-many calls, probably within a critical section?
## Will apis change from {{doSomething(Foo foo, String msg, boolean flag)}} to 
{{doSomething(Long fooId, int fooField1, long fooField2, boolean fooField3, 
long fooField4, String msg, boolean flag)}}?
## If we add another field, do we go back and update all the apis again?


 Implement off-heap data structures for NameNode and other HDFS memory 
 optimization
 --

 Key: HDFS-6709
 URL: https://issues.apache.org/jira/browse/HDFS-6709
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-6709.001.patch


 We should investigate implementing off-heap data structures for NameNode and 
 other HDFS memory optimization.  These data structures could reduce latency 
 by avoiding the long GC times that occur with large Java heaps.  We could 
 also avoid per-object memory overheads and control memory layout a little bit 
 better.  This also would allow us to use the JVM's compressed oops 
 optimization even with really large namespaces, if we could get the Java heap 
 below 32 GB for those cases.  This would provide another performance and 
 memory efficiency boost.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6657) Remove link to 'Legacy UI' in trunk's Namenode UI

2014-07-24 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6657:
-

   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk. Thanks [~vinayrpet] for the contribution.

 Remove link to 'Legacy UI' in trunk's Namenode UI
 -

 Key: HDFS-6657
 URL: https://issues.apache.org/jira/browse/HDFS-6657
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-6657.patch, HDFS-6657.patch


 A link to the 'Legacy UI' is provided on the namenode's UI.
 Since all jsp pages have been removed in trunk, these links will not work and 
 can be removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6657) Remove link to 'Legacy UI' in trunk's Namenode UI

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073415#comment-14073415
 ] 

Hudson commented on HDFS-6657:
--

FAILURE: Integrated in Hadoop-trunk-Commit #5960 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5960/])
HDFS-6657. Remove link to 'Legacy UI' in trunk's Namenode UI. Contributed by 
Vinayakumar B. (wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1613195)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/index.html
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary/index.html


 Remove link to 'Legacy UI' in trunk's Namenode UI
 -

 Key: HDFS-6657
 URL: https://issues.apache.org/jira/browse/HDFS-6657
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-6657.patch, HDFS-6657.patch


 A link to the 'Legacy UI' is provided on the namenode's UI.
 Since all jsp pages have been removed in trunk, these links will not work and 
 can be removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6722) Display readable last contact time for dead nodes on NN webUI

2014-07-24 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073427#comment-14073427
 ] 

Haohui Mai commented on HDFS-6722:
--

{code}
+
+var HELPERS = {
+  'helper_lastcontact_tostring' : function (chunk, ctx, bodies, params) {
+var value = dust.helpers.tap(params.value, chunk, ctx);
+return chunk.write('' + new Date(Date.now()-1000*Number(value)));
+  }
+};
+
{code}

It might be worthwhile to move this function to dfs-dust.js, and replace 
{{helper_date_tostring}} with {{date_to_string}}, but it's your call.

Other than that the patch looks good to me.

 Display readable last contact time for dead nodes on NN webUI
 -

 Key: HDFS-6722
 URL: https://issues.apache.org/jira/browse/HDFS-6722
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
Assignee: Ming Ma
 Attachments: HDFS-6722-2.patch, HDFS-6722.patch


 For dead node info on the NN webUI, admins want to know when the nodes became 
 dead, to troubleshoot missing blocks, etc. Currently the webUI displays the 
 last contact in seconds since the last contact. It would be useful to display 
 the info in Date format.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6723) New NN webUI no longer displays decommissioned state for dead node

2014-07-24 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073435#comment-14073435
 ] 

Haohui Mai commented on HDFS-6723:
--

Looks good to me. +1. I'll commit it shortly.

 New NN webUI no longer displays decommissioned state for dead node
 --

 Key: HDFS-6723
 URL: https://issues.apache.org/jira/browse/HDFS-6723
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
 Attachments: HDFS-6723.patch


 Somehow the new webUI doesn't show if a given dead node is decommissioned or 
 not. JMX does return the correct info. Perhaps some bug in dfshealth.html?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6742) Support sorting datanode list on the new NN webUI

2014-07-24 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073444#comment-14073444
 ] 

Arpit Agarwal commented on HDFS-6742:
-

Hi Ming, I noticed you filed a number of Jiras for NN WebUI enhancements.

If you agree, you could group them as sub-tasks under an umbrella Jira, to make 
them easier to track.

Thanks.

 Support sorting datanode list on the new NN webUI
 -

 Key: HDFS-6742
 URL: https://issues.apache.org/jira/browse/HDFS-6742
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma

 The legacy webUI allows sorting the datanode list by a specific column such 
 as hostname. It is handy because admins can find patterns more quickly, 
 especially on big clusters.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5919) FileJournalManager doesn't purge empty and corrupt inprogress edits files

2014-07-24 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073446#comment-14073446
 ] 

Jing Zhao commented on HDFS-5919:
-

The patch looks good to me. +1.
Maybe we can also use this chance to update the Java comment in 
{{testRetainExtraLogsLimitedSegments}}? Looks like the existing two comments 
are out of date, and thus the numbers are not correct.

 FileJournalManager doesn't purge empty and corrupt inprogress edits files
 -

 Key: HDFS-5919
 URL: https://issues.apache.org/jira/browse/HDFS-5919
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-5919.patch, HDFS-5919.patch


 FileJournalManager doesn't purge empty and corrupt inprogress edit files.
 These stale files will accumulate over time.
 They should be cleared along with the purging of other edit logs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6723) New NN webUI no longer displays decommissioned state for dead node

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073447#comment-14073447
 ] 

Hudson commented on HDFS-6723:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5962 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5962/])
HDFS-6723. New NN webUI no longer displays decommissioned state for dead node. 
Contributed by Ming Ma. (wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1613220)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html


 New NN webUI no longer displays decommissioned state for dead node
 --

 Key: HDFS-6723
 URL: https://issues.apache.org/jira/browse/HDFS-6723
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
 Attachments: HDFS-6723.patch


 Somehow the new webUI doesn't show if a given dead node is decommissioned or 
 not. JMX does return the correct info. Perhaps some bug in dfshealth.html?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6723) New NN webUI no longer displays decommissioned state for dead node

2014-07-24 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-6723:
-

   Resolution: Fixed
Fix Version/s: 2.5.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk, branch-2 and branch-2.5. Thanks [~mingma] 
for the contribution.

 New NN webUI no longer displays decommissioned state for dead node
 --

 Key: HDFS-6723
 URL: https://issues.apache.org/jira/browse/HDFS-6723
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
 Fix For: 2.5.0

 Attachments: HDFS-6723.patch


 Somehow the new webUI doesn't show if a given dead node is decommissioned or 
 not. JMX does return the correct info. Perhaps some bug in dfshealth.html?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5919) FileJournalManager doesn't purge empty and corrupt inprogress edits files

2014-07-24 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073462#comment-14073462
 ] 

Uma Maheswara Rao G commented on HDFS-5919:
---

+1 from me as well. Thanks for fixing this Vinay!

 FileJournalManager doesn't purge empty and corrupt inprogress edits files
 -

 Key: HDFS-5919
 URL: https://issues.apache.org/jira/browse/HDFS-5919
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-5919.patch, HDFS-5919.patch


 FileJournalManager doesn't purge empty and corrupt inprogress edit files.
 These stale files will accumulate over time.
 They should be cleared along with the purging of other edit logs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6715) webhdfs wont fail over when it gets java.io.IOException: Namenode is in startup mode

2014-07-24 Thread Stephen Chu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073466#comment-14073466
 ] 

Stephen Chu commented on HDFS-6715:
---

Thank you for fixing this, [~jingzhao]. Changes LGTM. I manually deployed and 
verified. +1 (non-binding).

 webhdfs wont fail over when it gets java.io.IOException: Namenode is in 
 startup mode
 

 Key: HDFS-6715
 URL: https://issues.apache.org/jira/browse/HDFS-6715
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, webhdfs
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Jing Zhao
 Attachments: HDFS-6715.000.patch, HDFS-6715.001.patch


 Noticed in our HA testing that when we run an MR job with the webhdfs file 
 system we sometimes run into 
 {code}
 2014-04-17 05:08:06,346 INFO [AsyncDispatcher event handler] 
 org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics 
 report from attempt_1397710493213_0001_r_08_0: Container killed by the 
 ApplicationMaster.
 Container killed on request. Exit code is 143
 Container exited with a non-zero exit code 143
 2014-04-17 05:08:10,205 ERROR [CommitterEvent Processor #1] 
 org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Could not 
 commit job
 java.io.IOException: Namenode is in startup mode
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6715) webhdfs wont fail over when it gets java.io.IOException: Namenode is in startup mode

2014-07-24 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073473#comment-14073473
 ] 

Suresh Srinivas commented on HDFS-6715:
---

[~jingzhao], thanks for fixing this. [~schu], thanks for testing it.

+1 for the change.

 webhdfs wont fail over when it gets java.io.IOException: Namenode is in 
 startup mode
 

 Key: HDFS-6715
 URL: https://issues.apache.org/jira/browse/HDFS-6715
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, webhdfs
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Jing Zhao
 Attachments: HDFS-6715.000.patch, HDFS-6715.001.patch


 Noticed in our HA testing that when we run an MR job with the webhdfs file 
 system we sometimes run into 
 {code}
 2014-04-17 05:08:06,346 INFO [AsyncDispatcher event handler] 
 org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics 
 report from attempt_1397710493213_0001_r_08_0: Container killed by the 
 ApplicationMaster.
 Container killed on request. Exit code is 143
 Container exited with a non-zero exit code 143
 2014-04-17 05:08:10,205 ERROR [CommitterEvent Processor #1] 
 org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Could not 
 commit job
 java.io.IOException: Namenode is in startup mode
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6715) webhdfs wont fail over when it gets java.io.IOException: Namenode is in startup mode

2014-07-24 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-6715:


   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks for the review, Suresh and [~schu]! I've committed this into trunk and 
branch-2.

 webhdfs wont fail over when it gets java.io.IOException: Namenode is in 
 startup mode
 

 Key: HDFS-6715
 URL: https://issues.apache.org/jira/browse/HDFS-6715
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, webhdfs
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Jing Zhao
 Fix For: 2.6.0

 Attachments: HDFS-6715.000.patch, HDFS-6715.001.patch


 Noticed in our HA testing that when we run an MR job with the webhdfs file 
 system we sometimes run into 
 {code}
 2014-04-17 05:08:06,346 INFO [AsyncDispatcher event handler] 
 org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics 
 report from attempt_1397710493213_0001_r_08_0: Container killed by the 
 ApplicationMaster.
 Container killed on request. Exit code is 143
 Container exited with a non-zero exit code 143
 2014-04-17 05:08:10,205 ERROR [CommitterEvent Processor #1] 
 org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Could not 
 commit job
 java.io.IOException: Namenode is in startup mode
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6715) webhdfs wont fail over when it gets java.io.IOException: Namenode is in startup mode

2014-07-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073491#comment-14073491
 ] 

Hudson commented on HDFS-6715:
--

FAILURE: Integrated in Hadoop-trunk-Commit #5963 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5963/])
HDFS-6715. Webhdfs wont fail over when it gets java.io.IOException: Namenode is 
in startup mode. Contributed by Jing Zhao. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1613237)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFSForHA.java


 webhdfs wont fail over when it gets java.io.IOException: Namenode is in 
 startup mode
 

 Key: HDFS-6715
 URL: https://issues.apache.org/jira/browse/HDFS-6715
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, webhdfs
Affects Versions: 2.2.0
Reporter: Arpit Gupta
Assignee: Jing Zhao
 Fix For: 2.6.0

 Attachments: HDFS-6715.000.patch, HDFS-6715.001.patch


 Noticed in our HA testing that when we run an MR job with the webhdfs file 
 system we sometimes run into 
 {code}
 2014-04-17 05:08:06,346 INFO [AsyncDispatcher event handler] 
 org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics 
 report from attempt_1397710493213_0001_r_08_0: Container killed by the 
 ApplicationMaster.
 Container killed on request. Exit code is 143
 Container exited with a non-zero exit code 143
 2014-04-17 05:08:10,205 ERROR [CommitterEvent Processor #1] 
 org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Could not 
 commit job
 java.io.IOException: Namenode is in startup mode
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-272) Update startup scripts to start Checkpoint node instead of SecondaryNameNode

2014-07-24 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko resolved HDFS-272.
--

Resolution: Won't Fix

Resolving. It was created when the SNN was deprecated. Now that the SNN is 
reinstated, there is no need to modify the script.

 Update startup scripts to start Checkpoint node instead of SecondaryNameNode
 

 Key: HDFS-272
 URL: https://issues.apache.org/jira/browse/HDFS-272
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Konstantin Shvachko
  Labels: newbie

 Start up script {{start-dfs.sh}} should start Checkpoint node instead of 
 SecondaryNameNode.
 It should provide an option to start Checkpoint or Backup node or secondary.
 The default should be checkpoint.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6709) Implement off-heap data structures for NameNode and other HDFS memory optimization

2014-07-24 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073529#comment-14073529
 ] 

Colin Patrick McCabe commented on HDFS-6709:


bq. I thought RTTI is per class, not instance? If yes, the savings are 
immaterial?

RTTI has to be per-instance.  That is why you can pass around Object instances 
and cast them to whatever you want.  Java has to store this information 
somewhere (think about it).  If Java didn't store this, it would have no way to 
know whether the cast should succeed or not.  Then you would be in the same 
situation as in C, where you can cast something to something else and get 
random garbage bits.
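
To illustrate (plain Java, nothing HDFS-specific):

{code}
// The static type of 'o' is Object, so the only way the JVM can check the
// cast below is by consulting type information reachable from the instance
// itself, i.e. its object header.
Object o = System.nanoTime() % 2 == 0 ? "a string" : Integer.valueOf(42);
if (o instanceof String) {
  String s = (String) o;  // runtime-checked via per-instance type info
  System.out.println(s.length());
}
{code}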

bq. Using misaligned access may result in processor incompatibility, impact 
performance, introduces atomicity and CAS problems, concurrent access to 
adjacent misaligned memory in the cache line may be completely unsafe.

I know about alignment restrictions.  There are easy ways around that problem-- 
instead of getLong you use two getShort calls, etc., depending on the minimum 
alignment you can rely on.  I don't see how CAS or atomicity are relevant, 
since we're not discussing atomic data structures.  The performance benefits of 
storing less data can often cancel out the performance disadvantages of doing 
unaligned access.  It depends on the scenario.
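
In that spirit, a sketch (hypothetical helper, not from the patch) that reads a 
long at a 4-byte-aligned offset using two aligned int reads:

{code}
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hypothetical helper: compose a 64-bit read from two aligned 32-bit reads,
// assuming 'offset' is only guaranteed to be 4-byte aligned and the buffer
// has been set to little-endian byte order.
static long getLongAligned4(ByteBuffer buf, int offset) {
  assert buf.order() == ByteOrder.LITTLE_ENDIAN;
  long lo = buf.getInt(offset) & 0xFFFFFFFFL;      // low 32 bits
  long hi = buf.getInt(offset + 4) & 0xFFFFFFFFL;  // high 32 bits
  return (hi << 32) | lo;
}
{code}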

bq. No references, only primitives can be stored off-heap, so how do value 
types (non-boxed primitives, correct?) apply? Wouldn't the instance managing 
the slab have methods that return the correct primitive?

The point is that with control over the layout, you can do better.  I guess a 
more concrete example might help explain this.

bq. OO encapsulation and polymorphism are lost?

Take a look at {{BlockInfo#triplets}}.  How much OO encapsulation do you see in 
an array of Object[], with a special comment above about how to interpret each 
set of three entries?  Most of the places we'd like to use off-heap storage are 
already full of hacks to abuse the Java type system to squeeze in a few extra 
bytes.  Arrays of primitives, arrays of objects, with special conventions are 
routine.
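
For anyone who hasn't read it, the convention is roughly this (paraphrased and 
simplified, not a verbatim copy of the source):

{code}
// One flat Object[] whose layout is enforced only by a comment.
// For replica i of the block:
//   triplets[3*i]     = the DatanodeStorageInfo that holds the replica
//   triplets[3*i + 1] = the previous BlockInfo on that storage's block list
//   triplets[3*i + 2] = the next BlockInfo on that storage's block list
private Object[] triplets;
{code}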

bq. Does FooManager instantiate new Foo instances every time FooManager.get(id) 
is called? If yes, it generates a tremendous amount of garbage that defeats the 
GC benefit of going off heap.

No, because every modern GC uses generational collection.  This means that 
short-lived instances are quickly cleaned up, without any pauses.

The rest of the questions seem to be variants on this one.  Think about it.  
All the code we have in FSNamesystem follows the pattern: look up inode, do 
something to inode, done with inode.  We can create temporary INode objects and 
they'll never be promoted out of the young generation, since they don't stick 
around between RPC calls.  Even if they somehow did (how?), with a dramatically 
smaller heap the full GC would no longer be scary.  And we'd get other 
performance benefits like the compressed oops optimizations.  Anyway, the 
temporary inode objects would probably just be thin objects which contain an 
off-heap memory reference and a bunch of getters/setters, to avoid doing a lot 
of unnecessary serde.
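
To sketch that last idea concretely (invented names and field layout; 
sun.misc.Unsafe is used purely for illustration, this is not from the attached 
patch):

{code}
import java.lang.reflect.Field;
import sun.misc.Unsafe;

// Hypothetical flyweight: a small, short-lived on-heap view over an off-heap
// inode record. Only the 8-byte base address lives on the heap; the GC never
// sees the record data itself.
final class OffHeapInodeView {
  private static final Unsafe UNSAFE = getUnsafe();
  private static final int MTIME_OFFSET = 0;  // invented record layout
  private static final int ATIME_OFFSET = 8;

  private final long addr;                    // off-heap base address
  OffHeapInodeView(long addr) { this.addr = addr; }

  long getMtime()       { return UNSAFE.getLong(addr + MTIME_OFFSET); }
  void setMtime(long v) { UNSAFE.putLong(addr + MTIME_OFFSET, v); }
  long getAtime()       { return UNSAFE.getLong(addr + ATIME_OFFSET); }

  private static Unsafe getUnsafe() {
    try {
      Field f = Unsafe.class.getDeclaredField("theUnsafe");
      f.setAccessible(true);
      return (Unsafe) f.get(null);
    } catch (ReflectiveOperationException e) {
      throw new ExceptionInInitializerError(e);
    }
  }
}
{code}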

 Implement off-heap data structures for NameNode and other HDFS memory 
 optimization
 --

 Key: HDFS-6709
 URL: https://issues.apache.org/jira/browse/HDFS-6709
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-6709.001.patch


 We should investigate implementing off-heap data structures for NameNode and 
 other HDFS memory optimization.  These data structures could reduce latency 
 by avoiding the long GC times that occur with large Java heaps.  We could 
 also avoid per-object memory overheads and control memory layout a little bit 
 better.  This also would allow us to use the JVM's compressed oops 
 optimization even with really large namespaces, if we could get the Java heap 
 below 32 GB for those cases.  This would provide another performance and 
 memory efficiency boost.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6526) Implement HDFS TtlManager

2014-07-24 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073551#comment-14073551
 ] 

Daryn Sharp commented on HDFS-6526:
---

I see scalability and performance issues (maybe a few bugs) with the ttl 
manager which make it unsuitable for large namesystems.  I wasn't too concerned 
because it's a "nice to have" feature.  I perked up when folding in the "must 
have" trash emptier was mentioned in HDFS-6525.

Would you please elaborate on whether you plan to simply have the trash emptier 
and ttl manager run as distinct services in the same adjunct process?  Or do 
you plan on the emptier actually leveraging/relying on ttls?

As a general comment, a feature like this is attractive but may be highly 
dangerous.  You may want to consider means to safeguard against a severely 
skewed system clock, else the ttl manager might go on a mass murder spree in 
the filesystem...
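
For example, even a crude guard along these lines (entirely hypothetical names) 
would stop a mass expiry after a clock jump:

{code}
// Hypothetical safeguard: compare wall-clock progress against monotonic
// progress since the last pass, and skip deletions if they disagree badly.
long wallDelta = System.currentTimeMillis() - lastWallMillis;
long monoDelta = (System.nanoTime() - lastMonoNanos) / 1000000L;
if (Math.abs(wallDelta - monoDelta) > MAX_SKEW_MILLIS) {
  LOG.warn("Wall clock moved " + (wallDelta - monoDelta)
      + "ms relative to monotonic time; skipping this TTL expiration pass");
  return;
}
{code}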

 Implement HDFS TtlManager
 -

 Key: HDFS-6526
 URL: https://issues.apache.org/jira/browse/HDFS-6526
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client, namenode
Affects Versions: 2.4.0
Reporter: Zesheng Wu
Assignee: Zesheng Wu
 Attachments: HDFS-6526.1.patch


 This issue is used to track development of HDFS TtlManager, for details see 
 HDFS-6382.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6696) Name node cannot start if the path of a file under construction contains .snapshot

2014-07-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-6696:
--

Attachment: hdfs-6696.003.patch

Same patch without the binary diff. All of these apply fine for me with {{patch 
-p0 -E < thePatch}}, so not sure what's going on.

 Name node cannot start if the path of a file under construction contains 
 .snapshot
 

 Key: HDFS-6696
 URL: https://issues.apache.org/jira/browse/HDFS-6696
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Andrew Wang
Priority: Blocker
 Attachments: hdfs-6696.001.patch, hdfs-6696.002.patch, 
 hdfs-6696.003.patch


 Using {{-renameReserved}} to rename .snapshot in a pre-hdfs-snapshot-feature 
 fsimage during upgrade only works if there is nothing under construction 
 under the renamed directory.  I am not sure whether it takes care of edits 
 containing .snapshot properly.
 The workaround is to identify these directories and rename them, then do 
 {{saveNamespace}} before performing upgrade.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6247) Avoid timeouts for replaceBlock() call by sending intermediate responses to Balancer

2014-07-24 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073602#comment-14073602
 ] 

Charles Lamb commented on HDFS-6247:


Hi [~vinayrpet],

I'm curious about why you are using a 5sec heartbeat interval. That seems small 
relative to the timeout on the socket.



 Avoid timeouts for replaceBlock() call by sending intermediate responses to 
 Balancer
 

 Key: HDFS-6247
 URL: https://issues.apache.org/jira/browse/HDFS-6247
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer, datanode
Affects Versions: 2.4.0
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-6247.patch, HDFS-6247.patch, HDFS-6247.patch, 
 HDFS-6247.patch


 Currently there is no response sent from the target Datanode to the Balancer 
 for replaceBlock() calls.
 Since the block movement for balancing is throttled, a complete block 
 movement will take time, and this could result in a timeout at the Balancer, 
 which will be trying to read the status message.
  
 To avoid this, while a replaceBlock() call is in progress the Datanode can 
 send IN_PROGRESS status messages to the Balancer, so that the Balancer does 
 not time out and treat the block movement as failed.
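  
 A rough sketch of the proposed behavior (invented helper names; see the 
 attached patches for the real change):
 {code}
 // Keep the Balancer's blocking read alive while the throttled copy runs,
 // then send the final status exactly as before.
 long lastResponseTime = System.currentTimeMillis();
 while (!blockMoveFinished()) {
   copyNextThrottledChunk();
   long now = System.currentTimeMillis();
   if (now - lastResponseTime > RESPONSE_INTERVAL_MS) {
     sendResponse(out, Status.IN_PROGRESS);  // intermediate keepalive
     lastResponseTime = now;
   }
 }
 sendResponse(out, Status.SUCCESS);          // final status as today
 {code}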



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6696) Name node cannot start if the path of a file under construction contains .snapshot

2014-07-24 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073603#comment-14073603
 ] 

Jing Zhao commented on HDFS-6696:
-

The patch looks good to me. The new unit tests also pass on my local machine 
after applying the binary changes. +1 pending Jenkins.

 Name node cannot start if the path of a file under construction contains 
 .snapshot
 

 Key: HDFS-6696
 URL: https://issues.apache.org/jira/browse/HDFS-6696
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Andrew Wang
Priority: Blocker
 Attachments: hdfs-6696.001.patch, hdfs-6696.002.patch, 
 hdfs-6696.003.patch


 Using {{-renameReserved}} to rename .snapshot in a pre-hdfs-snapshot-feature 
 fsimage during upgrade only works if there is nothing under construction 
 under the renamed directory.  I am not sure whether it takes care of edits 
 containing .snapshot properly.
 The workaround is to identify these directories and rename them, then do 
 {{saveNamespace}} before performing upgrade.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6749) FSNamesystem#getXAttrs and listXAttrs should call resolvePath

2014-07-24 Thread Charles Lamb (JIRA)
Charles Lamb created HDFS-6749:
--

 Summary: FSNamesystem#getXAttrs and listXAttrs should call 
resolvePath
 Key: HDFS-6749
 URL: https://issues.apache.org/jira/browse/HDFS-6749
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Charles Lamb
Assignee: Charles Lamb


FSNamesystem#getXAttrs and listXAttrs don't call FSDirectory#resolvePath. They 
should.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6665) Add tests for XAttrs in combination with viewfs

2014-07-24 Thread Stephen Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Chu updated HDFS-6665:
--

Target Version/s: 3.0.0, 2.6.0  (was: 2.6.0)
  Status: Patch Available  (was: Open)

 Add tests for XAttrs in combination with viewfs
 ---

 Key: HDFS-6665
 URL: https://issues.apache.org/jira/browse/HDFS-6665
 Project: Hadoop HDFS
  Issue Type: Test
  Components: hdfs-client
Affects Versions: 2.5.0
Reporter: Stephen Chu
Assignee: Stephen Chu
 Attachments: HDFS-6665.1.patch


 This is similar to HDFS-5624 (Add tests for ACLs in combination with viewfs)
 We should verify that XAttr operations work properly with viewfs, and that 
 XAttr commands are routed to the correct namenode in a federated deployment.
 Also, we should make sure that the behavior of XAttr commands on internal 
 dirs is consistent with other commands. For example, setPermission will throw 
 the "readonly" AccessControlException for paths above the root mount entry.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6665) Add tests for XAttrs in combination with viewfs

2014-07-24 Thread Stephen Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Chu updated HDFS-6665:
--

Attachment: HDFS-6665.1.patch

Attaching patch to add two tests that verify XAttrs with ViewFs and 
ViewFileSystem. They verify that the XAttr operations are routed to the correct 
NameNode.

 Add tests for XAttrs in combination with viewfs
 ---

 Key: HDFS-6665
 URL: https://issues.apache.org/jira/browse/HDFS-6665
 Project: Hadoop HDFS
  Issue Type: Test
  Components: hdfs-client
Affects Versions: 2.5.0
Reporter: Stephen Chu
Assignee: Stephen Chu
 Attachments: HDFS-6665.1.patch


 This is similar to HDFS-5624 (Add tests for ACLs in combination with viewfs)
 We should verify that XAttr operations work properly with viewfs, and that 
 XAttr commands are routed to the correct namenode in a federated deployment.
 Also, we should make sure that the behavior of XAttr commands on internal 
 dirs is consistent with other commands. For example, setPermission will throw 
 the "readonly" AccessControlException for paths above the root mount entry.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6709) Implement off-heap data structures for NameNode and other HDFS memory optimization

2014-07-24 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073677#comment-14073677
 ] 

Andrew Purtell commented on HDFS-6709:
--

bq. No, because every modern GC uses generational collection. This means that 
short-lived instances are quickly cleaned up, without any pauses.

... and modern JVM versions have escape analysis enabled by default. Although 
there are limitations, simple objects that don't escape the local block (like 
Iterators) or the method can be allocated on the stack once native code is 
emitted by the server compiler. No heap allocation happens at all. You can use 
fastdebug JVM builds during dev to learn explicitly what your code is doing in 
this regard.
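
A minimal example of the kind of allocation that can vanish this way:

{code}
// The Iterator allocated by this enhanced-for loop never escapes the method,
// so with escape analysis the server compiler can scalar-replace it instead
// of heap-allocating it.
static int sum(java.util.List<Integer> values) {
  int total = 0;
  for (int v : values) {
    total += v;
  }
  return total;
}
{code}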

 Implement off-heap data structures for NameNode and other HDFS memory 
 optimization
 --

 Key: HDFS-6709
 URL: https://issues.apache.org/jira/browse/HDFS-6709
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-6709.001.patch


 We should investigate implementing off-heap data structures for NameNode and 
 other HDFS memory optimization.  These data structures could reduce latency 
 by avoiding the long GC times that occur with large Java heaps.  We could 
 also avoid per-object memory overheads and control memory layout a little bit 
 better.  This also would allow us to use the JVM's compressed oops 
 optimization even with really large namespaces, if we could get the Java heap 
 below 32 GB for those cases.  This would provide another performance and 
 memory efficiency boost.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6749) FSNamesystem#getXAttrs and listXAttrs should call resolvePath

2014-07-24 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073694#comment-14073694
 ] 

Chris Nauroth commented on HDFS-6749:
-

It looks to me like {{getAclStatus}} has the same problem, and maybe {{concat}} 
also?

 FSNamesystem#getXAttrs and listXAttrs should call resolvePath
 -

 Key: HDFS-6749
 URL: https://issues.apache.org/jira/browse/HDFS-6749
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Charles Lamb
Assignee: Charles Lamb

 FSNamesystem#getXAttrs and listXAttrs don't call FSDirectory#resolvePath. 
 They should.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6749) FSNamesystem#getXAttrs and listXAttrs should call resolvePath

2014-07-24 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073698#comment-14073698
 ] 

Charles Lamb commented on HDFS-6749:


concat's comments say that it does not support /.reserved/.inodes, so I think 
that omission is by design. However, isFileClosed also looks like it needs 
appropriate calls to resolvePath.


 FSNamesystem#getXAttrs and listXAttrs should call resolvePath
 -

 Key: HDFS-6749
 URL: https://issues.apache.org/jira/browse/HDFS-6749
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Charles Lamb
Assignee: Charles Lamb

 FSNamesystem#getXAttrs and listXAttrs don't call FSDirectory#resolvePath. 
 They should.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work started] (HDFS-6749) FSNamesystem#getXAttrs and listXAttrs should call resolvePath

2014-07-24 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-6749 started by Charles Lamb.

 FSNamesystem#getXAttrs and listXAttrs should call resolvePath
 -

 Key: HDFS-6749
 URL: https://issues.apache.org/jira/browse/HDFS-6749
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Charles Lamb
Assignee: Charles Lamb

 FSNamesystem#getXAttrs and listXAttrs don't call FSDirectory#resolvePath. 
 They should.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6749) FSNamesystem#getXAttrs and listXAttrs should call resolvePath

2014-07-24 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073699#comment-14073699
 ] 

Chris Nauroth commented on HDFS-6749:
-

Sounds good.  Thanks, Charles.

 FSNamesystem#getXAttrs and listXAttrs should call resolvePath
 -

 Key: HDFS-6749
 URL: https://issues.apache.org/jira/browse/HDFS-6749
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Charles Lamb
Assignee: Charles Lamb

 FSNamesystem#getXAttrs and listXAttrs don't call FSDirectory#resolvePath. 
 They should.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6749) FSNamesystem#getXAttrs and listXAttrs should call resolvePath

2014-07-24 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6749:
---

Attachment: HDFS-6749.001.patch

It looks like 4 places total: getXAttrs, listXAttrs, isFileClosed, 
getAclStatus. None of the snapshot methods call resolvePath, which I suspect 
is by design.

Diffs against branch-2 attached.


 FSNamesystem#getXAttrs and listXAttrs should call resolvePath
 -

 Key: HDFS-6749
 URL: https://issues.apache.org/jira/browse/HDFS-6749
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Charles Lamb
Assignee: Charles Lamb
 Attachments: HDFS-6749.001.patch


 FSNamesystem#getXAttrs and listXAttrs don't call FSDirectory#resolvePath. 
 They should.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6749) FSNamesystem#getXAttrs and listXAttrs should call resolvePath

2014-07-24 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6749:
---

Status: Patch Available  (was: In Progress)

 FSNamesystem#getXAttrs and listXAttrs should call resolvePath
 -

 Key: HDFS-6749
 URL: https://issues.apache.org/jira/browse/HDFS-6749
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Charles Lamb
Assignee: Charles Lamb
 Attachments: HDFS-6749.001.patch


 FSNamesystem#getXAttrs and listXAttrs don't call FSDirectory#resolvePath. 
 They should.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6724) Decrypt EDEK before creating CryptoInputStream/CryptoOutputStream

2014-07-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-6724:
--

Attachment: hdfs-6724.001.patch

Sorry about the delay here, had to do some KeyProvider API improvements along 
the way. Still waiting on HADOOP-10981 to be committed to trunk (the 
EncryptedKeyVersion factory method), but you can look at this combined patch 
for an idea.

 Decrypt EDEK before creating CryptoInputStream/CryptoOutputStream
 -

 Key: HDFS-6724
 URL: https://issues.apache.org/jira/browse/HDFS-6724
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Andrew Wang
 Attachments: hdfs-6724.001.patch


 In DFSClient, we need to decrypt EDEK before creating 
 CryptoInputStream/CryptoOutputStream, currently edek is used directly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6737) DFSClient should use IV generated based on the configured CipherSuite with codecs used

2014-07-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-6737:
--

Summary: DFSClient should use IV generated based on the configured 
CipherSuite with codecs used  (was: DFSClinet should use IV generated beased on 
the configured CipherSuite with codecs used)

 DFSClient should use IV generated based on the configured CipherSuite with 
 codecs used
 --

 Key: HDFS-6737
 URL: https://issues.apache.org/jira/browse/HDFS-6737
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Attachments: HDFS-6737.patch


 Seems like we are using the encrypted data encryption key's IV directly, but 
 the underlying codec's cipher suite may expect a different IV length. So, we 
 should generate the IV from the configured codec's cipher suite.
 {code}
  final CryptoInputStream cryptoIn =
   new CryptoInputStream(dfsis, CryptoCodec.getInstance(conf, 
   feInfo.getCipherSuite()), 
 feInfo.getEncryptedDataEncryptionKey(),
   feInfo.getIV());
 {code}
 So, instead of using feInfo.getIV(), we should generate it like:
 {code}
 byte[] iv = new byte[codec.getCipherSuite().getAlgorithmBlockSize()]; 
 codec.generateSecureRandom(iv);
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work started] (HDFS-6724) Decrypt EDEK before creating CryptoInputStream/CryptoOutputStream

2014-07-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-6724 started by Andrew Wang.

 Decrypt EDEK before creating CryptoInputStream/CryptoOutputStream
 -

 Key: HDFS-6724
 URL: https://issues.apache.org/jira/browse/HDFS-6724
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Andrew Wang
 Attachments: hdfs-6724.001.patch


 In DFSClient, we need to decrypt EDEK before creating 
 CryptoInputStream/CryptoOutputStream, currently edek is used directly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6469) Coordinated replication of the namespace using ConsensusNode

2014-07-24 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073745#comment-14073745
 ] 

Konstantin Boudnik commented on HDFS-6469:
--

Almost as in a couple of months from presentation.

 Coordinated replication of the namespace using ConsensusNode
 

 Key: HDFS-6469
 URL: https://issues.apache.org/jira/browse/HDFS-6469
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: namenode
Affects Versions: 3.0.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Attachments: CNodeDesign.pdf


 This is a proposal to introduce ConsensusNode - an evolution of the NameNode, 
 which enables replication of the namespace on multiple nodes of an HDFS 
 cluster by means of a Coordination Engine.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6724) Decrypt EDEK before creating CryptoInputStream/CryptoOutputStream

2014-07-24 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073797#comment-14073797
 ] 

Charles Lamb commented on HDFS-6724:


Hi [~andrew.wang],

I only have a few little nits. In general I'm +1, but I'd like to hear what Yi 
has to say.

DFSUtil.java:

{code}
@throws java.io.IOException.
{code}

You don't need java.io. since it's imported.

KeyProviderCryptoExtension.java:

{code}
 * @param encryptedKeyIv   Initialization vector of the encrypted
 * key. The IV of the encryption key used to
 * encrypt the encrypted key is derived from
 * this IV.
{code}

In this comment would it be possible to add the word "data", as in "data 
encryption key", to help clarify the difference between the two keys? I realize 
you've already got "encrypted" and "encryption", but that's a subtle difference 
and likely to be lost on an unfamiliar reader.

TestEncryptionZones.java:

I don't see a lot of System.out.printlns in unit tests. I suppose it's because 
it's harder to find the output. Would it be more in vogue to use logging?


 Decrypt EDEK before creating CryptoInputStream/CryptoOutputStream
 -

 Key: HDFS-6724
 URL: https://issues.apache.org/jira/browse/HDFS-6724
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Andrew Wang
 Attachments: hdfs-6724.001.patch


 In DFSClient, we need to decrypt EDEK before creating 
 CryptoInputStream/CryptoOutputStream, currently edek is used directly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6650) API to get the root of an encryption zone for a path

2014-07-24 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6650:
---

Issue Type: Bug  (was: Sub-task)
Parent: (was: HDFS-6134)

 API to get the root of an encryption zone for a path
 

 Key: HDFS-6650
 URL: https://issues.apache.org/jira/browse/HDFS-6650
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Andrew Wang
Assignee: Andrew Wang

 It'd be useful to be able to query, given a path within an encryption zone, 
 the root of the encryption zone.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6737) DFSClient should use IV generated based on the configured CipherSuite with codecs used

2014-07-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073813#comment-14073813
 ] 

Andrew Wang commented on HDFS-6737:
---

Hi Uma, good points here. I chatted with [~tucu00] about this, here's how it 
works right now:

- Each encryption zone has an ezKey of some size. generateEncryptedKey 
hardcodes usage of AES/CTR/NoPadding, which means a 16B IV.
- When generating a new encrypted key, it has the same keySize as the ezKey and 
the same IV size as the hardcoded AES/CTR/NoPadding.
- All AES algorithms use a 16B IV, so we're fine as long as the DEK is always 
AES too (an okay limitation).
- We don't foresee switching the hardcoded AES/CTR/NoPadding, so don't need to 
pass a CipherSuite into generate/decryptEncryptedKey
- Enforcing that the ezKey and DEK need to have the same keySize is not great, 
but tucu thinks it's a reasonable limitation. If a user wants to change the 
keysize, they need to make a new EZ with a bigger ezKey and copy everything 
there.
- You can still use whatever AES algorithm you want for the actual data 
encryption, which is what the per-file CipherSuite specifies.

I find this pretty complicated, so it is definitely something we need to put in 
the user documentation. createEncryptionZone also seems like it needs a way of 
specifying the key size, but we could do that when we actually support AES-256. 
Do you think we need any other improvements? We could try to improve how things 
are modeled in CipherSuite (since we depend on the block size being 16B), but 
maybe it's okay as is.
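
Roughly, the flow above looks like this (a sketch against the in-flight 
KeyProviderCryptoExtension API; names may shift once HADOOP-10981 lands):

{code}
import java.io.IOException;
import java.security.GeneralSecurityException;
import org.apache.hadoop.crypto.key.KeyProvider.KeyVersion;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.EncryptedKeyVersion;

// Sketch: generateEncryptedKey hardcodes AES/CTR/NoPadding (16B IV) on the
// NN side; the client decrypts the EDEK to obtain the DEK used to build the
// crypto streams.
static KeyVersion materializeDek(KeyProviderCryptoExtension kp, String ezKeyName)
    throws IOException, GeneralSecurityException {
  EncryptedKeyVersion edek = kp.generateEncryptedKey(ezKeyName);
  return kp.decryptEncryptedKey(edek);
}
{code}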

 DFSClient should use IV generated based on the configured CipherSuite with 
 codecs used
 --

 Key: HDFS-6737
 URL: https://issues.apache.org/jira/browse/HDFS-6737
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G
 Attachments: HDFS-6737.patch


 Seems like we are using the encrypted data encryption key's IV directly, but 
 the underlying codec's cipher suite may expect a different IV length. So, we 
 should generate the IV from the configured codec's cipher suite.
 {code}
  final CryptoInputStream cryptoIn =
   new CryptoInputStream(dfsis, CryptoCodec.getInstance(conf, 
   feInfo.getCipherSuite()), 
 feInfo.getEncryptedDataEncryptionKey(),
   feInfo.getIV());
 {code}
 So, instead of using feInfo.getIV(), we should generate it like:
 {code}
 byte[] iv = new byte[codec.getCipherSuite().getAlgorithmBlockSize()]; 
 codec.generateSecureRandom(iv);
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6696) Name node cannot start if the path of a file under construction contains .snapshot

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073814#comment-14073814
 ] 

Hadoop QA commented on HDFS-6696:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657664/hdfs-6696.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
  
org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7456//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7456//console

This message is automatically generated.

 Name node cannot start if the path of a file under construction contains 
 .snapshot
 

 Key: HDFS-6696
 URL: https://issues.apache.org/jira/browse/HDFS-6696
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Andrew Wang
Priority: Blocker
 Attachments: hdfs-6696.001.patch, hdfs-6696.002.patch, 
 hdfs-6696.003.patch


 Using {{-renameReserved}} to rename .snapshot in a pre-hdfs-snapshot-feature 
 fsimage during upgrade only works if there is nothing under construction 
 under the renamed directory.  I am not sure whether it takes care of edits 
 containing .snapshot properly.
 The workaround is to identify these directories and rename them, then do 
 {{saveNamespace}} before performing upgrade.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6696) Name node cannot start if the path of a file under construction contains .snapshot

2014-07-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073846#comment-14073846
 ] 

Andrew Wang commented on HDFS-6696:
---

I ran the failed tests successfully locally, so I think we're good. Thanks for
reviewing, Jing. I'll commit this shortly to all the branches.

 Name node cannot start if the path of a file under construction contains 
 .snapshot
 

 Key: HDFS-6696
 URL: https://issues.apache.org/jira/browse/HDFS-6696
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Andrew Wang
Priority: Blocker
 Attachments: hdfs-6696.001.patch, hdfs-6696.002.patch, 
 hdfs-6696.003.patch


 Using {{-renameReserved}} to rename .snapshot in a pre-hdfs-snapshot-feature
 fsimage during upgrade only works if there is nothing under
 construction under the renamed directory.  I am not sure whether it takes
 care of edits containing .snapshot properly.
 The workaround is to identify these directories and rename, then do 
 {{saveNamespace}} before performing upgrade.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6696) Name node cannot start if the path of a file under construction contains .snapshot

2014-07-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-6696:
--

   Resolution: Fixed
Fix Version/s: 2.5.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2, branch-2.5. Thanks again Kihwal for reporting, 
Jing for reviewing.

I did mess up the trunk commit message a bit, forgot to put the JIRA #.

 Name node cannot start if the path of a file under construction contains 
 .snapshot
 

 Key: HDFS-6696
 URL: https://issues.apache.org/jira/browse/HDFS-6696
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Andrew Wang
Priority: Blocker
 Fix For: 2.5.0

 Attachments: hdfs-6696.001.patch, hdfs-6696.002.patch, 
 hdfs-6696.003.patch


 Using {{-renameReserved}} to rename .snapshot in a pre-hdfs-snapshot-feature
 fsimage during upgrade only works if there is nothing under
 construction under the renamed directory.  I am not sure whether it takes
 care of edits containing .snapshot properly.
 The workaround is to identify these directories and rename, then do 
 {{saveNamespace}} before performing upgrade.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6665) Add tests for XAttrs in combination with viewfs

2014-07-24 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073864#comment-14073864
 ] 

Andrew Wang commented on HDFS-6665:
---

Hi Stephen, thanks for the patch, just nitty comments again:

* "Remove the ACL entries on the first namespace" probably meant xattr, in both
files

+1 pending that though, nice work.

 Add tests for XAttrs in combination with viewfs
 ---

 Key: HDFS-6665
 URL: https://issues.apache.org/jira/browse/HDFS-6665
 Project: Hadoop HDFS
  Issue Type: Test
  Components: hdfs-client
Affects Versions: 2.5.0
Reporter: Stephen Chu
Assignee: Stephen Chu
 Attachments: HDFS-6665.1.patch


 This is similar to HDFS-5624 (Add tests for ACLs in combination with viewfs)
 We should verify that XAttr operations work properly with viewfs, and that 
 XAttr commands are routed to the correct namenode in a federated deployment.
 Also, we should make sure that the behavior of XAttr commands on internal 
 dirs is consistent with other commands. For example, setPermission will throw 
 the readonly AccessControlException for paths above the root mount entry.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6665) Add tests for XAttrs in combination with viewfs

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073872#comment-14073872
 ] 

Hadoop QA commented on HDFS-6665:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657676/HDFS-6665.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS
  
org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7457//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7457//console

This message is automatically generated.

 Add tests for XAttrs in combination with viewfs
 ---

 Key: HDFS-6665
 URL: https://issues.apache.org/jira/browse/HDFS-6665
 Project: Hadoop HDFS
  Issue Type: Test
  Components: hdfs-client
Affects Versions: 2.5.0
Reporter: Stephen Chu
Assignee: Stephen Chu
 Attachments: HDFS-6665.1.patch


 This is similar to HDFS-5624 (Add tests for ACLs in combination with viewfs)
 We should verify that XAttr operations work properly with viewfs, and that 
 XAttr commands are routed to the correct namenode in a federated deployment.
 Also, we should make sure that the behavior of XAttr commands on internal 
 dirs is consistent with other commands. For example, setPermission will throw 
 the readonly AccessControlException for paths above the root mount entry.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6724) Decrypt EDEK before creating CryptoInputStream/CryptoOutputStream

2014-07-24 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-6724:
--

Attachment: hdfs-6724.002.patch

New patch attached, still incorporating changes from HADOOP-10891 for now (git 
mirror hasn't updated). Fixed Charles' comments, thanks for reviewing Charles.

 Decrypt EDEK before creating CryptoInputStream/CryptoOutputStream
 -

 Key: HDFS-6724
 URL: https://issues.apache.org/jira/browse/HDFS-6724
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Andrew Wang
 Attachments: hdfs-6724.001.patch, hdfs-6724.002.patch


 In DFSClient, we need to decrypt the EDEK before creating the
 CryptoInputStream/CryptoOutputStream; currently the EDEK is used directly.
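
 The patch builds on the HADOOP-10891 KeyProvider changes. As a self-contained
 illustration of what decrypting an EDEK with the zone key amounts to (the
 method name and the AES/CTR choice here are assumptions, not the patch):
{code}
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class EdekSketch {
  // Decrypt an encrypted data encryption key (EDEK) with the zone key,
  // so the *decrypted* key material, not the EDEK, feeds the crypto stream.
  static byte[] decryptEdek(byte[] zoneKeyMaterial, byte[] edekIv, byte[] edek)
      throws Exception {
    Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
    cipher.init(Cipher.DECRYPT_MODE,
        new SecretKeySpec(zoneKeyMaterial, "AES"),
        new IvParameterSpec(edekIv));
    return cipher.doFinal(edek);
  }
}
{code}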



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6665) Add tests for XAttrs in combination with viewfs

2014-07-24 Thread Stephen Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Chu updated HDFS-6665:
--

Attachment: HDFS-6665.2.patch

Thanks for the review and catching that, [~andrew.wang]!

Uploading a new patch to fix the comment.

Looked into test results of the TestBlockTokenWithDFS and 
TestNamenodeCapacityReport, and they're not related to these WebHDFS test 
changes. Re-ran them locally successfully.

 Add tests for XAttrs in combination with viewfs
 ---

 Key: HDFS-6665
 URL: https://issues.apache.org/jira/browse/HDFS-6665
 Project: Hadoop HDFS
  Issue Type: Test
  Components: hdfs-client
Affects Versions: 2.5.0
Reporter: Stephen Chu
Assignee: Stephen Chu
 Attachments: HDFS-6665.1.patch, HDFS-6665.2.patch


 This is similar to HDFS-5624 (Add tests for ACLs in combination with viewfs)
 We should verify that XAttr operations work properly with viewfs, and that 
 XAttr commands are routed to the correct namenode in a federated deployment.
 Also, we should make sure that the behavior of XAttr commands on internal 
 dirs is consistent with other commands. For example, setPermission will throw 
 the readonly AccessControlException for paths above the root mount entry.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6724) Decrypt EDEK before creating CryptoInputStream/CryptoOutputStream

2014-07-24 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073886#comment-14073886
 ] 

Charles Lamb commented on HDFS-6724:


+1. Thanks Andrew.


 Decrypt EDEK before creating CryptoInputStream/CryptoOutputStream
 -

 Key: HDFS-6724
 URL: https://issues.apache.org/jira/browse/HDFS-6724
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Andrew Wang
 Attachments: hdfs-6724.001.patch, hdfs-6724.002.patch


 In DFSClient, we need to decrypt the EDEK before creating the
 CryptoInputStream/CryptoOutputStream; currently the EDEK is used directly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6724) Decrypt EDEK before creating CryptoInputStream/CryptoOutputStream

2014-07-24 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073891#comment-14073891
 ] 

Yi Liu commented on HDFS-6724:
--

Look good to me, +1, Thanks Andrew.

 Decrypt EDEK before creating CryptoInputStream/CryptoOutputStream
 -

 Key: HDFS-6724
 URL: https://issues.apache.org/jira/browse/HDFS-6724
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Andrew Wang
 Attachments: hdfs-6724.001.patch, hdfs-6724.002.patch


 In DFSClient, we need to decrypt the EDEK before creating the
 CryptoInputStream/CryptoOutputStream; currently the EDEK is used directly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6750) The DataNode should use its shared memory segment to mark short-circuit replicas that have been unlinked as stale

2014-07-24 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-6750:
--

 Summary: The DataNode should use its shared memory segment to mark 
short-circuit replicas that have been unlinked as stale
 Key: HDFS-6750
 URL: https://issues.apache.org/jira/browse/HDFS-6750
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, hdfs-client
Affects Versions: 2.4.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


The DataNode should mark short-circuit replicas that have been unlinked as 
stale.  This would prevent replicas that had been deleted from lingering in the 
DFSClient cache.  (At least for DFSClients that use shared memory; those 
without shared memory will still have to use the timeout method.)

Note that when a replica is stale, any ongoing reads or mmaps can still 
complete.  But stale replicas will be removed from the DFSClient cache once 
they're no longer in use.
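
A minimal sketch of the client-side bookkeeping described above (class and
field names are hypothetical, not the actual patch):
{code}
// Hypothetical sketch: the DataNode flips a stale bit via the shared
// memory slot; the client honors it once the replica is idle.
class ReplicaSlotSketch {
  private volatile boolean stale;  // set after the DataNode unlinks the block
  private int refCount = 1;

  void markStale() { stale = true; }

  // Ongoing reads and mmaps finish normally; the replica is only
  // evicted from the cache when its reference count drops to zero.
  synchronized boolean unrefAndMaybeEvict() {
    return --refCount == 0 && stale;
  }
}
{code}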



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6750) The DataNode should use its shared memory segment to mark short-circuit replicas that have been unlinked as stale

2014-07-24 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-6750:
---

Status: Patch Available  (was: Open)

 The DataNode should use its shared memory segment to mark short-circuit 
 replicas that have been unlinked as stale
 -

 Key: HDFS-6750
 URL: https://issues.apache.org/jira/browse/HDFS-6750
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, hdfs-client
Affects Versions: 2.4.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-6750.001.patch


 The DataNode should mark short-circuit replicas that have been unlinked as 
 stale.  This would prevent replicas that had been deleted from lingering in 
 the DFSClient cache.  (At least for DFSClients that use shared memory; those 
 without shared memory will still have to use the timeout method.)
 Note that when a replica is stale, any ongoing reads or mmaps can still 
 complete.  But stale replicas will be removed from the DFSClient cache once 
 they're no longer in use.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6750) The DataNode should use its shared memory segment to mark short-circuit replicas that have been unlinked as stale

2014-07-24 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-6750:
---

Attachment: HDFS-6750.001.patch

 The DataNode should use its shared memory segment to mark short-circuit 
 replicas that have been unlinked as stale
 -

 Key: HDFS-6750
 URL: https://issues.apache.org/jira/browse/HDFS-6750
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, hdfs-client
Affects Versions: 2.4.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-6750.001.patch


 The DataNode should mark short-circuit replicas that have been unlinked as 
 stale.  This would prevent replicas that had been deleted from lingering in 
 the DFSClient cache.  (At least for DFSClients that use shared memory; those 
 without shared memory will still have to use the timeout method.)
 Note that when a replica is stale, any ongoing reads or mmaps can still 
 complete.  But stale replicas will be removed from the DFSClient cache once 
 they're no longer in use.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6749) FSNamesystem#getXAttrs and listXAttrs should call resolvePath

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073905#comment-14073905
 ] 

Hadoop QA commented on HDFS-6749:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657680/HDFS-6749.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7458//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7458//console

This message is automatically generated.

 FSNamesystem#getXAttrs and listXAttrs should call resolvePath
 -

 Key: HDFS-6749
 URL: https://issues.apache.org/jira/browse/HDFS-6749
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Charles Lamb
Assignee: Charles Lamb
 Attachments: HDFS-6749.001.patch


 FSNamesystem#getXAttrs and listXAttrs don't call FSDirectory#resolvePath. 
 They should.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6728) Dynamically add new volumes to DataStorage, formatted if necessary.

2014-07-24 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-6728:


Attachment: HDFS-6728.001.patch

Updated patch to address the failures. 

 Dynamically add new volumes to DataStorage, formatted if necessary.
 ---

 Key: HDFS-6728
 URL: https://issues.apache.org/jira/browse/HDFS-6728
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 2.4.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
  Labels: datanode
 Attachments: HDFS-6728.000.patch, HDFS-6728.000.patch, 
 HDFS-6728.001.patch


 When dynamically adding a volume to {{DataStorage}}, it should prepare the
 {{data dir}}, e.g., formatting it if it is empty.
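
 A rough sketch of that prepare step (the format hook is a placeholder for
 whatever {{DataStorage}} actually does, not the real code path):
{code}
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

class VolumePrepSketch {
  // Format the newly added data dir only when it holds no existing content.
  static void prepareDataDir(Path dataDir, Runnable format) throws IOException {
    try (DirectoryStream<Path> entries = Files.newDirectoryStream(dataDir)) {
      if (!entries.iterator().hasNext()) {
        format.run();  // empty dir: lay down the VERSION file, etc.
      }
    }
  }
}
{code}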



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6747) Display the most recent GC info on NN webUI

2014-07-24 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073916#comment-14073916
 ] 

Allen Wittenauer commented on HDFS-6747:


If people are using the Web UI to solve those problems, that's bad because it 
means that we have an education problem.  Most of the pro Hadoop admins 
(including myself) configure the system such that problems such as these are 
handled proactively via monitoring and metrics collection.  Putting it on the 
NN UI is really just a "yup, my phone went off for a valid reason" kind of 
thing.

Also, as [~esteban] points out, the *length* of the GC is more important than
the fact that it *happened*. Modern JREs configured with modern GC engines do
collection all the time.

 

 Display the most recent GC info on NN webUI
 ---

 Key: HDFS-6747
 URL: https://issues.apache.org/jira/browse/HDFS-6747
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma

 It would be handy if the recent GC information were available on the NN webUI,
 so admins don't need to dig out GC logs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6722) Display readable last contact time for dead nodes on NN webUI

2014-07-24 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073961#comment-14073961
 ] 

Ming Ma commented on HDFS-6722:
---

My vote is to keep the current patch given each function is used only within 
the scope of each tab. We can refactor these later when necessary given there 
might be more changes coming to the webUI.

 Display readable last contact time for dead nodes on NN webUI
 -

 Key: HDFS-6722
 URL: https://issues.apache.org/jira/browse/HDFS-6722
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
Assignee: Ming Ma
 Attachments: HDFS-6722-2.patch, HDFS-6722.patch


 For dead node info on the NN webUI, admins want to know when the nodes became
 dead, to troubleshoot missing blocks, etc. Currently the webUI displays the
 last contact in seconds since the last contact. It would be useful to display
 the info in date format.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6747) Display the most recent GC info on NN webUI

2014-07-24 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073971#comment-14073971
 ] 

Ming Ma commented on HDFS-6747:
---

Completely agree with the monitoring and automation aspect; we actually use
them a lot. However, certain information such as missing blocks and
decommission progress is easier to surface via the NN webUI or HDFS CLI than
via a metrics collection system's UI. For the GC data, thanks Esteban,
HDFS-6403 should be good enough. We can resolve this jira.

 Display the most recent GC info on NN webUI
 ---

 Key: HDFS-6747
 URL: https://issues.apache.org/jira/browse/HDFS-6747
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma

 It would be handy if the recent GC information were available on the NN webUI,
 so admins don't need to dig out GC logs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6665) Add tests for XAttrs in combination with viewfs

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073997#comment-14073997
 ] 

Hadoop QA commented on HDFS-6665:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657727/HDFS-6665.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7459//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7459//console

This message is automatically generated.

 Add tests for XAttrs in combination with viewfs
 ---

 Key: HDFS-6665
 URL: https://issues.apache.org/jira/browse/HDFS-6665
 Project: Hadoop HDFS
  Issue Type: Test
  Components: hdfs-client
Affects Versions: 2.5.0
Reporter: Stephen Chu
Assignee: Stephen Chu
 Attachments: HDFS-6665.1.patch, HDFS-6665.2.patch


 This is similar to HDFS-5624 (Add tests for ACLs in combination with viewfs)
 We should verify that XAttr operations work properly with viewfs, and that 
 XAttr commands are routed to the correct namenode in a federated deployment.
 Also, we should make sure that the behavior of XAttr commands on internal 
 dirs is consistent with other commands. For example, setPermission will throw 
 the readonly AccessControlException for paths above the root mount entry.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6750) The DataNode should use its shared memory segment to mark short-circuit replicas that have been unlinked as stale

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074014#comment-14074014
 ] 

Hadoop QA commented on HDFS-6750:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657735/HDFS-6750.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
  
org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
  org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7460//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7460//console

This message is automatically generated.

 The DataNode should use its shared memory segment to mark short-circuit 
 replicas that have been unlinked as stale
 -

 Key: HDFS-6750
 URL: https://issues.apache.org/jira/browse/HDFS-6750
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, hdfs-client
Affects Versions: 2.4.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-6750.001.patch


 The DataNode should mark short-circuit replicas that have been unlinked as 
 stale.  This would prevent replicas that had been deleted from lingering in 
 the DFSClient cache.  (At least for DFSClients that use shared memory; those 
 without shared memory will still have to use the timeout method.)
 Note that when a replica is stale, any ongoing reads or mmaps can still 
 complete.  But stale replicas will be removed from the DFSClient cache once 
 they're no longer in use.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6657) Remove link to 'Legacy UI' in trunk's Namenode UI

2014-07-24 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074017#comment-14074017
 ] 

Vinayakumar B commented on HDFS-6657:
-

Thanks [~wheat9] for review and commit.

 Remove link to 'Legacy UI' in trunk's Namenode UI
 -

 Key: HDFS-6657
 URL: https://issues.apache.org/jira/browse/HDFS-6657
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-6657.patch, HDFS-6657.patch


 A link to 'Legacy UI' is provided on the namenode's UI.
 Since all jsp pages are removed in trunk, these links will not work and can be
 removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6728) Dynamically add new volumes to DataStorage, formatted if necessary.

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074020#comment-14074020
 ] 

Hadoop QA commented on HDFS-6728:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657738/HDFS-6728.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7461//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7461//console

This message is automatically generated.

 Dynamically add new volumes to DataStorage, formatted if necessary.
 ---

 Key: HDFS-6728
 URL: https://issues.apache.org/jira/browse/HDFS-6728
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 2.4.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
  Labels: datanode
 Attachments: HDFS-6728.000.patch, HDFS-6728.000.patch, 
 HDFS-6728.001.patch


 When dynamically adding a volume to {{DataStorage}}, it should prepare the
 {{data dir}}, e.g., formatting it if it is empty.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6709) Implement off-heap data structures for NameNode and other HDFS memory optimization

2014-07-24 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074027#comment-14074027
 ] 

Kai Zheng commented on HDFS-6709:
-

bq.  Sadly, while investigating off heap performance last fall, I found this 
article that claims off-heap reads via a DirectByteBuffer have horrible 
performance
I just took a look at the post. Yes, it claimed DirectByteBuffer has the same
great write performance as Unsafe, but that the read performance is horrible.
Why that would be isn't clear yet. Looking at the following code from the JRE,
there seems to be no big difference between read and write in DirectByteBuffer:
{code}
public byte get() {
    return ((unsafe.getByte(ix(nextGetIndex()))));
}
{code}
{code}
public ByteBuffer put(byte x) {
    unsafe.putByte(ix(nextPutIndex()), ((x)));
    return this;
}
{code}
Questions here: 1) Why does read perform so much worse than write, if that's
true? 2) Is it true that simply adding the index check causes a big performance
loss? Some tests would be needed to make sure DirectByteBuffer is good enough
to meet the needs here even from a performance standpoint, and the performance
should be compared apples to apples in exactly the cases here.
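
For instance, a crude read-path micro-benchmark (a plain loop rather than JMH,
so treat any numbers as indicative only) could look like:
{code}
import java.nio.ByteBuffer;

public class DirectReadBench {
  public static void main(String[] args) {
    final int size = 64 * 1024 * 1024;
    ByteBuffer buf = ByteBuffer.allocateDirect(size);
    long sink = 0;
    for (int warm = 0; warm < 5; warm++) {        // warm up the JIT first
      buf.clear();
      while (buf.hasRemaining()) sink += buf.get();
    }
    buf.clear();
    long start = System.nanoTime();
    while (buf.hasRemaining()) sink += buf.get(); // bounds-checked reads
    long elapsed = System.nanoTime() - start;
    System.out.println(size + " reads took " + elapsed + " ns (sink=" + sink + ")");
  }
}
{code}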

 Implement off-heap data structures for NameNode and other HDFS memory 
 optimization
 --

 Key: HDFS-6709
 URL: https://issues.apache.org/jira/browse/HDFS-6709
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HDFS-6709.001.patch


 We should investigate implementing off-heap data structures for NameNode and 
 other HDFS memory optimization.  These data structures could reduce latency 
 by avoiding the long GC times that occur with large Java heaps.  We could 
 also avoid per-object memory overheads and control memory layout a little bit 
 better.  This also would allow us to use the JVM's compressed oops 
 optimization even with really large namespaces, if we could get the Java heap 
 below 32 GB for those cases.  This would provide another performance and 
 memory efficiency boost.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6665) Add tests for XAttrs in combination with viewfs

2014-07-24 Thread Stephen Chu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074032#comment-14074032
 ] 

Stephen Chu commented on HDFS-6665:
---

TestPipelinesFailover failure is not due to the patch changes. I re-ran the 
test locally successfully a couple times to be sure. Other than that, the 
Hadoop QA job looks good.

 Add tests for XAttrs in combination with viewfs
 ---

 Key: HDFS-6665
 URL: https://issues.apache.org/jira/browse/HDFS-6665
 Project: Hadoop HDFS
  Issue Type: Test
  Components: hdfs-client
Affects Versions: 2.5.0
Reporter: Stephen Chu
Assignee: Stephen Chu
 Attachments: HDFS-6665.1.patch, HDFS-6665.2.patch


 This is similar to HDFS-5624 (Add tests for ACLs in combination with viewfs)
 We should verify that XAttr operations work properly with viewfs, and that 
 XAttr commands are routed to the correct namenode in a federated deployment.
 Also, we should make sure that the behavior of XAttr commands on internal 
 dirs is consistent with other commands. For example, setPermission will throw 
 the readonly AccessControlException for paths above the root mount entry.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5919) FileJournalManager doesn't purge empty and corrupt inprogress edits files

2014-07-24 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-5919:


Attachment: HDFS-5919.patch

Updated the Java comment in {{testRetainExtraLogsLimitedSegments}}.

Please review

 FileJournalManager doesn't purge empty and corrupt inprogress edits files
 -

 Key: HDFS-5919
 URL: https://issues.apache.org/jira/browse/HDFS-5919
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-5919.patch, HDFS-5919.patch, HDFS-5919.patch


 FileJournalManager doesn't purge empty and corrupt inprogress edit files.
 These stale files will accumulate over time.
 They should be cleared along with the purging of other edit logs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6247) Avoid timeouts for replaceBlock() call by sending intermediate responses to Balancer

2014-07-24 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074047#comment-14074047
 ] 

Vinayakumar B commented on HDFS-6247:
-

Hi [~clamb], Thanks for taking a look at the patch.

bq. I'm curious about why you are using a 5sec heartbeat interval. That seems 
small relative to the timeout on the socket.
 I thought it would be good enough to send the status. Since the total number of
block movements during balancing is limited by bandwidth, I felt a 5 second
interval is not going to add too much traffic.
 
How much do you want me to increase it? Would 30 sec be fine?
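
As a sketch of the mechanism under discussion (the interval and the
sendInProgress hook are placeholders, not the patch):
{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class ReplaceBlockKeepalive {
  // Periodically tell the Balancer the throttled copy is still alive,
  // so its blocking read of the status does not time out mid-move.
  static ScheduledExecutorService start(Runnable sendInProgress,
      long intervalSeconds) {
    ScheduledExecutorService exec =
        Executors.newSingleThreadScheduledExecutor();
    exec.scheduleAtFixedRate(sendInProgress, intervalSeconds,
        intervalSeconds, TimeUnit.SECONDS);
    return exec;  // caller shuts it down when the block move completes
  }
}
{code}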

 Avoid timeouts for replaceBlock() call by sending intermediate responses to 
 Balancer
 

 Key: HDFS-6247
 URL: https://issues.apache.org/jira/browse/HDFS-6247
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer, datanode
Affects Versions: 2.4.0
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-6247.patch, HDFS-6247.patch, HDFS-6247.patch, 
 HDFS-6247.patch


 Currently there is no response sent from target Datanode to Balancer for the 
 replaceBlock() calls.
 Since the block movement for balancing is throttled, a complete block movement
 will take time, and this could result in a timeout at the Balancer, which will
 be trying to read the status message.
  
 To avoid this, while the replaceBlock() call is in progress the Datanode can
 send IN_PROGRESS status messages to the Balancer, preventing timeouts and the
 block movement being treated as failed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-573) Porting libhdfs to Windows

2014-07-24 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth reassigned HDFS-573:
--

Assignee: Chris Nauroth

 Porting libhdfs to Windows
 --

 Key: HDFS-573
 URL: https://issues.apache.org/jira/browse/HDFS-573
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
 Environment: Windows, Visual Studio 2008
Reporter: Ziliang Guo
Assignee: Chris Nauroth
   Original Estimate: 336h
  Remaining Estimate: 336h

 The current C code in libhdfs is written using C99 conventions and also uses 
 a few POSIX specific functions such as hcreate, hsearch, and pthread mutex 
 locks.  To compile it using Visual Studio would require a conversion of the 
 code in hdfsJniHelper.c and hdfs.c to C89 and replacement/reimplementation of 
 the POSIX functions.  The code also uses the stdint.h header, which is not 
 part of the original C89, though what appears to be a BSD-licensed
 reimplementation written to be compatible with MSVC is floating around.  I have
 already done the other necessary conversions, as well as created a simplistic 
 hash bucket for use with hcreate and hsearch and successfully built a DLL of 
 libhdfs.  Further testing is needed to see if it is usable by other programs 
 to actually access hdfs, which will likely happen in the next few weeks as 
 the Condor Project continues with its file transfer work.
 In the process, I've removed a few consts that I believe are extraneous, and
 also fixed an incorrect array initialization where someone was attempting to 
 initialize with something like this: JavaVMOption options[noArgs]; where 
 noArgs was being incremented in the code above.  This was in the 
 hdfsJniHelper.c file, in the getJNIEnv function.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6749) FSNamesystem#getXAttrs and listXAttrs should call resolvePath

2014-07-24 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074050#comment-14074050
 ] 

Chris Nauroth commented on HDFS-6749:
-

The patch looks good.  I suppose the way to test this would be to call these 
APIs using /.reserved/.inodes/inode ID as the input path.  I'd expect the 
tests to fail before your patch, but pass after your patch.
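
A hedged sketch of such a test (cluster setup elided; {{dfs}} here is assumed
to be a {{DistributedFileSystem}} from a MiniDFSCluster):
{code}
// Resolve the file's inode id, then exercise getXAttrs through the
// /.reserved/.inodes path; without the resolvePath fix the raw path
// is handed to the xattr code unresolved.
Path file = new Path("/test/file");
long inodeId = dfs.getClient().getFileInfo(file.toString()).getFileId();
Path inodePath = new Path("/.reserved/.inodes/" + inodeId);
Map<String, byte[]> xattrs = dfs.getXAttrs(inodePath);
{code}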

 FSNamesystem#getXAttrs and listXAttrs should call resolvePath
 -

 Key: HDFS-6749
 URL: https://issues.apache.org/jira/browse/HDFS-6749
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Charles Lamb
Assignee: Charles Lamb
 Attachments: HDFS-6749.001.patch


 FSNamesystem#getXAttrs and listXAttrs don't call FSDirectory#resolvePath. 
 They should.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6665) Add tests for XAttrs in combination with viewfs

2014-07-24 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074052#comment-14074052
 ] 

Vinayakumar B commented on HDFS-6665:
-

TestPipelinesFailover failure might be related to HDFS-6694

 Add tests for XAttrs in combination with viewfs
 ---

 Key: HDFS-6665
 URL: https://issues.apache.org/jira/browse/HDFS-6665
 Project: Hadoop HDFS
  Issue Type: Test
  Components: hdfs-client
Affects Versions: 2.5.0
Reporter: Stephen Chu
Assignee: Stephen Chu
 Attachments: HDFS-6665.1.patch, HDFS-6665.2.patch


 This is similar to HDFS-5624 (Add tests for ACLs in combination with viewfs)
 We should verify that XAttr operations work properly with viewfs, and that 
 XAttr commands are routed to the correct namenode in a federated deployment.
 Also, we should make sure that the behavior of XAttr commands on internal 
 dirs is consistent with other commands. For example, setPermission will throw 
 the readonly AccessControlException for paths above the root mount entry.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6665) Add tests for XAttrs in combination with viewfs

2014-07-24 Thread Stephen Chu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074059#comment-14074059
 ] 

Stephen Chu commented on HDFS-6665:
---

Ah, yes, seems to be the same "too many open files" issue talked about in that
JIRA. Thanks for pointing me to it, Vinay.

 Add tests for XAttrs in combination with viewfs
 ---

 Key: HDFS-6665
 URL: https://issues.apache.org/jira/browse/HDFS-6665
 Project: Hadoop HDFS
  Issue Type: Test
  Components: hdfs-client
Affects Versions: 2.5.0
Reporter: Stephen Chu
Assignee: Stephen Chu
 Attachments: HDFS-6665.1.patch, HDFS-6665.2.patch


 This is similar to HDFS-5624 (Add tests for ACLs in combination with viewfs)
 We should verify that XAttr operations work properly with viewfs, and that 
 XAttr commands are routed to the correct namenode in a federated deployment.
 Also, we should make sure that the behavior of XAttr commands on internal 
 dirs is consistent with other commands. For example, setPermission will throw 
 the readonly AccessControlException for paths above the root mount entry.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6739) Add getDatanodeStorageReport to ClientProtocol

2014-07-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074066#comment-14074066
 ] 

Hadoop QA commented on HDFS-6739:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657433/h6739_20140724.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7462//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7462//console

This message is automatically generated.

 Add getDatanodeStorageReport to ClientProtocol
 --

 Key: HDFS-6739
 URL: https://issues.apache.org/jira/browse/HDFS-6739
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: h6739_20140724.patch


 ClientProtocol has a getDatanodeReport(..) methods for retrieving datanode 
 report from namenode.  However, there is no way to get datanode storage 
 report.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5919) FileJournalManager doesn't purge empty and corrupt inprogress edits files

2014-07-24 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14074090#comment-14074090
 ] 

Jing Zhao commented on HDFS-5919:
-

+1. Thanks Vinay!

 FileJournalManager doesn't purge empty and corrupt inprogress edits files
 -

 Key: HDFS-5919
 URL: https://issues.apache.org/jira/browse/HDFS-5919
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-5919.patch, HDFS-5919.patch, HDFS-5919.patch


 FileJournalManager doesn't purge empty and corrupt inprogress edit files.
 These stale files will accumulate over time.
 They should be cleared along with the purging of other edit logs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)