[jira] [Created] (HDFS-7010) boot up libhdfs3 project

2014-09-06 Thread Zhanwei Wang (JIRA)
Zhanwei Wang created HDFS-7010:
--

 Summary: boot up libhdfs3 project
 Key: HDFS-7010
 URL: https://issues.apache.org/jira/browse/HDFS-7010
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhanwei Wang


Boot up the libhdfs3 project with a CMake build, a README, and a license file.
Integrate Google Mock and Google Test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7010) boot up libhdfs3 project

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7010:
---
Attachment: HDFS-7010.patch

 boot up libhdfs3 project
 

 Key: HDFS-7010
 URL: https://issues.apache.org/jira/browse/HDFS-7010
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-7010.patch


 Boot up the libhdfs3 project with a CMake build, a README, and a license file.
 Integrate Google Mock and Google Test.





[jira] [Created] (HDFS-7011) Implement basic utilities for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)
Zhanwei Wang created HDFS-7011:
--

 Summary: Implement basic utilities for libhdfs3
 Key: HDFS-7011
 URL: https://issues.apache.org/jira/browse/HDFS-7011
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhanwei Wang








[jira] [Updated] (HDFS-7011) Implement basic utilities for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7011:
---
Description: Implement basic utilities such as hashing, exception handling, a 
logger, a configuration parser, checksum calculation, and so on.

 Implement basic utilities for libhdfs3
 --

 Key: HDFS-7011
 URL: https://issues.apache.org/jira/browse/HDFS-7011
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang

 Implement basic utilities such as hashing, exception handling, a logger, a 
 configuration parser, checksum calculation, and so on.





[jira] [Updated] (HDFS-6951) Saving namespace and restarting NameNode will remove existing encryption zones

2014-09-06 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6951:
---
Attachment: HDFS-6951.006.patch

Rebased again to account for HDFS-6986 checkin.

 Saving namespace and restarting NameNode will remove existing encryption zones
 --

 Key: HDFS-6951
 URL: https://issues.apache.org/jira/browse/HDFS-6951
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: encryption
Affects Versions: 3.0.0
Reporter: Stephen Chu
Assignee: Charles Lamb
 Attachments: HDFS-6951-prelim.002.patch, HDFS-6951-testrepo.patch, 
 HDFS-6951.001.patch, HDFS-6951.002.patch, HDFS-6951.003.patch, 
 HDFS-6951.004.patch, HDFS-6951.005.patch, HDFS-6951.006.patch, editsStored


 Currently, when users save the namespace and restart the NameNode, pre-existing 
 encryption zones will be wiped out.
 I could reproduce this on a pseudo-distributed cluster:
 * Create an encryption zone
 * List encryption zones and verify the newly created zone is present
 * Save the namespace
 * Kill and restart the NameNode
 * List the encryption zones and you'll find the encryption zone is missing
 I've attached a test case for {{TestEncryptionZones}} that reproduces this as 
 well. Removing the saveNamespace call will get the test to pass.





[jira] [Updated] (HDFS-7011) Implement basic utilities for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7011:
---
Attachment: HDFS-7011.patch

 Implement basic utilities for libhdfs3
 --

 Key: HDFS-7011
 URL: https://issues.apache.org/jira/browse/HDFS-7011
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-7011.patch


 Implement basic utilities such as hashing, exception handling, a logger, a 
 configuration parser, checksum calculation, and so on.





[jira] [Commented] (HDFS-6979) hdfs.dll does not produce .pdb files

2014-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124431#comment-14124431
 ] 

Hudson commented on HDFS-6979:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #672 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/672/])
HDFS-6979. hdfs.dll not produce .pdb files. Contributed by Chris Nauroth. 
(cnauroth: rev fab9bc58ec03ea81cd5ce8a8746a4ee588f7bb08)
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
HDFS-6979. Fix minor error in CHANGES.txt. Contributed by Chris Nauroth. 
(cnauroth: rev b051327ab6a01774e1dad59e1e547dd16f603789)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 hdfs.dll does not produce .pdb files
 

 Key: HDFS-6979
 URL: https://issues.apache.org/jira/browse/HDFS-6979
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Remus Rusanu
Assignee: Chris Nauroth
Priority: Minor
  Labels: build, cmake, native, windows
 Fix For: 3.0.0, 2.6.0

 Attachments: HDFS-6979.1.patch


 The hdfs.dll build does not produce a retail PDB. For comparison, we do produce 
 PDBs for winutils.exe and hadoop.dll.
 I did not verify whether the CMake project produces a DLL with an embedded PDB.





[jira] [Commented] (HDFS-6831) Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'

2014-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124430#comment-14124430
 ] 

Hudson commented on HDFS-6831:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #672 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/672/])
HDFS-6831. Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'. 
(Contributed by Xiaoyu Yao) (arp: rev 9e941d9f99168cae01f8d50622a616fc26c196d9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestTools.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java


 Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'
 ---

 Key: HDFS-6831
 URL: https://issues.apache.org/jira/browse/HDFS-6831
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Xiaoyu Yao
Priority: Minor
  Labels: newbie
 Fix For: 2.6.0

 Attachments: HDFS-6831.0.patch, HDFS-6831.1.patch, HDFS-6831.2.patch, 
 HDFS-6831.3.patch, HDFS-6831.4.patch


 There is an inconsistency between the console output of the 'hdfs dfsadmin' 
 command and that of the 'hdfs dfsadmin -help' command.
 {code}
 [root@trunk ~]# hdfs dfsadmin
 Usage: java DFSAdmin
 Note: Administrative commands can only be run as the HDFS superuser.
[-report]
[-safemode enter | leave | get | wait]
[-allowSnapshot snapshotDir]
[-disallowSnapshot snapshotDir]
[-saveNamespace]
[-rollEdits]
[-restoreFailedStorage true|false|check]
[-refreshNodes]
[-finalizeUpgrade]
[-rollingUpgrade [query|prepare|finalize]]
[-metasave filename]
[-refreshServiceAcl]
[-refreshUserToGroupsMappings]
[-refreshSuperUserGroupsConfiguration]
[-refreshCallQueue]
[-refresh]
[-printTopology]
[-refreshNamenodes datanodehost:port]
[-deleteBlockPool datanode-host:port blockpoolId [force]]
[-setQuota quota dirname...dirname]
[-clrQuota dirname...dirname]
[-setSpaceQuota quota dirname...dirname]
[-clrSpaceQuota dirname...dirname]
[-setBalancerBandwidth bandwidth in bytes per second]
[-fetchImage local directory]
[-shutdownDatanode datanode_host:ipc_port [upgrade]]
[-getDatanodeInfo datanode_host:ipc_port]
[-help [cmd]]
 {code}
 {code}
 [root@trunk ~]# hdfs dfsadmin -help
 hadoop dfsadmin performs DFS administrative commands.
 The full syntax is: 
 hadoop dfsadmin
   [-report [-live] [-dead] [-decommissioning]]
   [-safemode enter | leave | get | wait]
   [-saveNamespace]
   [-rollEdits]
   [-restoreFailedStorage true|false|check]
   [-refreshNodes]
   [-setQuota quota dirname...dirname]
   [-clrQuota dirname...dirname]
   [-setSpaceQuota quota dirname...dirname]
   [-clrSpaceQuota dirname...dirname]
   [-finalizeUpgrade]
   [-rollingUpgrade [query|prepare|finalize]]
   [-refreshServiceAcl]
   [-refreshUserToGroupsMappings]
   [-refreshSuperUserGroupsConfiguration]
   [-refreshCallQueue]
   [-refresh host:ipc_port key [arg1..argn]
   [-printTopology]
   [-refreshNamenodes datanodehost:port]
   [-deleteBlockPool datanodehost:port blockpoolId [force]]
   [-setBalancerBandwidth bandwidth]
   [-fetchImage local directory]
   [-allowSnapshot snapshotDir]
   [-disallowSnapshot snapshotDir]
   [-shutdownDatanode datanode_host:ipc_port [upgrade]]
   [-getDatanodeInfo datanode_host:ipc_port
   [-help [cmd]
 {code}
 These two outputs should be the same.





[jira] [Commented] (HDFS-6376) Distcp data between two HA clusters requires another configuration

2014-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124435#comment-14124435
 ] 

Hudson commented on HDFS-6376:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #672 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/672/])
HDFS-6376. Distcp data between two HA clusters requires another configuration. 
Contributed by Dave Marion and Haohui Mai. (jing: rev 
c6107f566ff01e9bfee9052f86f6e5b21d5e89f3)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockPoolManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/GetConf.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestGetConf.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Distcp data between two HA clusters requires another configuration
 --

 Key: HDFS-6376
 URL: https://issues.apache.org/jira/browse/HDFS-6376
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, federation, hdfs-client
Affects Versions: 2.2.0, 2.3.0, 2.4.0
 Environment: Hadoop 2.3.0
Reporter: Dave Marion
Assignee: Dave Marion
 Fix For: 3.0.0, 2.6.0

 Attachments: HDFS-6376-2.patch, HDFS-6376-3-branch-2.4.patch, 
 HDFS-6376-4-branch-2.4.patch, HDFS-6376-5-trunk.patch, 
 HDFS-6376-6-trunk.patch, HDFS-6376-7-trunk.patch, HDFS-6376-branch-2.4.patch, 
 HDFS-6376-patch-1.patch, HDFS-6376.000.patch, HDFS-6376.008.patch, 
 HDFS-6376.009.patch, HDFS-6376.010.patch, HDFS-6376.011.patch


 Users have to create a third set of configuration files for distcp when 
 transferring data between two HA clusters.
 Consider the scenario in [1]. You cannot put all of the required properties 
 in core-site.xml and hdfs-site.xml for the client to resolve the location of 
 both active namenodes. If you do, the datanodes from cluster A may join 
 cluster B. I cannot find a configuration option that tells the datanodes to 
 federate blocks for only one of the clusters in the configuration.
 [1] 
 http://mail-archives.apache.org/mod_mbox/hadoop-user/201404.mbox/%3CBAY172-W2133964E0C283968C161DD1520%40phx.gbl%3E
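The file list above shows the patch touches hdfs-default.xml; the property it introduces, `dfs.internal.nameservices`, lets a node list both clusters in `dfs.nameservices` while its datanodes register with only their own. A sketch of such a client-side hdfs-site.xml (nameservice IDs nsA/nsB and hostnames are placeholders):

```xml
<!-- Sketch only: nameservice IDs and hostnames are placeholders. -->
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>nsA,nsB</value>
  </property>
  <property>
    <!-- Datanodes federate blocks only for the locally listed nameservice,
         so they will not join the remote cluster. -->
    <name>dfs.internal.nameservices</name>
    <value>nsA</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.nsA</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.nsA.nn1</name>
    <value>nn1.clusterA.example.com:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.nsA.nn2</name>
    <value>nn2.clusterA.example.com:8020</value>
  </property>
  <!-- Repeat the dfs.ha.namenodes and rpc-address entries for nsB, and set
       the failover proxy provider for each nameservice as usual for HA. -->
</configuration>
```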





[jira] [Commented] (HDFS-6986) DistributedFileSystem must get delegation tokens from configured KeyProvider

2014-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124432#comment-14124432
 ] 

Hudson commented on HDFS-6986:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #672 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/672/])
HDFS-6986. DistributedFileSystem must get delegation tokens from configured 
KeyProvider. (zhz via tucu) (tucu: rev 3b35f81603bbfae119762b50bcb46de70a421368)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java


 DistributedFileSystem must get delegation tokens from configured KeyProvider
 

 Key: HDFS-6986
 URL: https://issues.apache.org/jira/browse/HDFS-6986
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: security
Reporter: Alejandro Abdelnur
Assignee: Zhe Zhang
 Fix For: 2.6.0

 Attachments: HDFS-6986-20140905-v2.patch, 
 HDFS-6986-20140905-v3.patch, HDFS-6986-20140905.patch, HDFS-6986.patch


 {{KeyProvider}} via {{KeyProviderDelegationTokenExtension}} provides 
 delegation tokens. {{DistributedFileSystem}} should augment the HDFS 
 delegation tokens with the keyprovider ones so tasks can interact with 
 keyprovider when it is a client/server impl (KMS).





[jira] [Commented] (HDFS-6862) Add missing timeout annotations to tests

2014-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124442#comment-14124442
 ] 

Hudson commented on HDFS-6862:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #672 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/672/])
HDFS-6862. Add missing timeout annotations to tests. (Contributed by Xiaoyu 
Yao) (arp: rev 9609b7303a98c8eff676c5a086b08b1ca9ab777c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestValidateConfigurationSettings.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAStateTransitions.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSServerPorts.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyCheckpoints.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDelegationTokensWithHA.java


 Add missing timeout annotations to tests
 

 Key: HDFS-6862
 URL: https://issues.apache.org/jira/browse/HDFS-6862
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.5.0
Reporter: Arpit Agarwal
Assignee: Xiaoyu Yao
  Labels: newbie
 Fix For: 2.6.0

 Attachments: HDFS-6862.0.patch


 One or more tests in the following classes are missing timeout annotations.
 # org.apache.hadoop.hdfs.server.namenode.TestValidateConfigurationSettings
 # org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints
 # org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
 # org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
 # org.apache.hadoop.hdfs.server.namenode.ha.TestHAMetrics
 # org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
 # org.apache.hadoop.hdfs.TestHDFSServerPorts





[jira] [Created] (HDFS-7012) Implement a TCP client for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)
Zhanwei Wang created HDFS-7012:
--

 Summary: Implement a TCP client for libhdfs3
 Key: HDFS-7012
 URL: https://issues.apache.org/jira/browse/HDFS-7012
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhanwei Wang


Implement a module for libhdfs3 that provides the basic functionality of a TCP client.





[jira] [Updated] (HDFS-7012) Implement a TCP client for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7012:
---
Attachment: HDFS-7012.patch

 Implement a TCP client for libhdfs3
 ---

 Key: HDFS-7012
 URL: https://issues.apache.org/jira/browse/HDFS-7012
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-7012.patch


 Implement a module for libhdfs3 that provides the basic functionality of a TCP client.





[jira] [Commented] (HDFS-6951) Saving namespace and restarting NameNode will remove existing encryption zones

2014-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124443#comment-14124443
 ] 

Hadoop QA commented on HDFS-6951:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12667002/HDFS-6951.006.patch
  against trunk revision 3b35f81.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7929//console

This message is automatically generated.

 Saving namespace and restarting NameNode will remove existing encryption zones
 --

 Key: HDFS-6951
 URL: https://issues.apache.org/jira/browse/HDFS-6951
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: encryption
Affects Versions: 3.0.0
Reporter: Stephen Chu
Assignee: Charles Lamb
 Attachments: HDFS-6951-prelim.002.patch, HDFS-6951-testrepo.patch, 
 HDFS-6951.001.patch, HDFS-6951.002.patch, HDFS-6951.003.patch, 
 HDFS-6951.004.patch, HDFS-6951.005.patch, HDFS-6951.006.patch, editsStored


 Currently, when users save the namespace and restart the NameNode, pre-existing 
 encryption zones will be wiped out.
 I could reproduce this on a pseudo-distributed cluster:
 * Create an encryption zone
 * List encryption zones and verify the newly created zone is present
 * Save the namespace
 * Kill and restart the NameNode
 * List the encryption zones and you'll find the encryption zone is missing
 I've attached a test case for {{TestEncryptionZones}} that reproduces this as 
 well. Removing the saveNamespace call will get the test to pass.





[jira] [Created] (HDFS-7013) Implement RPC framework version 9 for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)
Zhanwei Wang created HDFS-7013:
--

 Summary: Implement RPC framework version 9 for libhdfs3
 Key: HDFS-7013
 URL: https://issues.apache.org/jira/browse/HDFS-7013
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhanwei Wang


Implement the Hadoop RPC framework version 9 for libhdfs3





[jira] [Updated] (HDFS-7013) Implement RPC framework version 9 for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7013:
---
Attachment: HDFS-7013.patch

 Implement RPC framework version 9 for libhdfs3
 --

 Key: HDFS-7013
 URL: https://issues.apache.org/jira/browse/HDFS-7013
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-7013.patch


 Implement the Hadoop RPC framework version 9 for libhdfs3





[jira] [Created] (HDFS-7014) Implement Client - Namenode/Datanode protocol for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)
Zhanwei Wang created HDFS-7014:
--

 Summary: Implement Client - Namenode/Datanode protocol for libhdfs3
 Key: HDFS-7014
 URL: https://issues.apache.org/jira/browse/HDFS-7014
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhanwei Wang


Implement the Client - Namenode RPC protocol and support Namenode HA.
Implement the Client - Datanode RPC protocol.
Implement some basic server-side classes such as ExtendedBlock and LocatedBlock.





[jira] [Updated] (HDFS-7014) Implement Client - Namenode/Datanode protocol for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7014:
---
Attachment: HDFS-7014.patch

 Implement Client - Namenode/Datanode protocol for libhdfs3
 --

 Key: HDFS-7014
 URL: https://issues.apache.org/jira/browse/HDFS-7014
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-7014.patch


 Implement the Client - Namenode RPC protocol and support Namenode HA.
 Implement the Client - Datanode RPC protocol.
 Implement some basic server-side classes such as ExtendedBlock and LocatedBlock.





[jira] [Created] (HDFS-7015) Implement C++ interface for file system

2014-09-06 Thread Zhanwei Wang (JIRA)
Zhanwei Wang created HDFS-7015:
--

 Summary: Implement C++ interface for file system
 Key: HDFS-7015
 URL: https://issues.apache.org/jira/browse/HDFS-7015
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhanwei Wang


Implement the C++ interface for file system functions such as connect, 
deletePath and so on.





[jira] [Updated] (HDFS-7015) Implement C++ interface for file system

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7015:
---
Attachment: HDFS-7015.patch

 Implement C++ interface for file system
 ---

 Key: HDFS-7015
 URL: https://issues.apache.org/jira/browse/HDFS-7015
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-7015.patch


 Implement the C++ interface for file system functions such as connect, 
 deletePath and so on.





[jira] [Created] (HDFS-7016) Implement DataTransferProtocol and Inputstream

2014-09-06 Thread Zhanwei Wang (JIRA)
Zhanwei Wang created HDFS-7016:
--

 Summary: Implement DataTransferProtocol and Inputstream
 Key: HDFS-7016
 URL: https://issues.apache.org/jira/browse/HDFS-7016
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhanwei Wang


Implement DataTransferProtocol.
Implement RemoteBlockReader.
Implement LocalBlockReader (based on a client-datanode RPC call; it cannot work 
with security-enabled HDFS).
Implement the InputStream C++ interface.





[jira] [Updated] (HDFS-7016) Implement DataTransferProtocol and Inputstream

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7016:
---
Attachment: HDFS-7016.patch

 Implement DataTransferProtocol and Inputstream
 --

 Key: HDFS-7016
 URL: https://issues.apache.org/jira/browse/HDFS-7016
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-7016.patch


 Implement DataTransferProtocol.
 Implement RemoteBlockReader.
 Implement LocalBlockReader (based on a client-datanode RPC call; it cannot 
 work with security-enabled HDFS).
 Implement the InputStream C++ interface.





[jira] [Created] (HDFS-7017) Implement OutputStream for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)
Zhanwei Wang created HDFS-7017:
--

 Summary: Implement OutputStream for libhdfs3
 Key: HDFS-7017
 URL: https://issues.apache.org/jira/browse/HDFS-7017
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhanwei Wang


Implement the pipeline and the OutputStream C++ interface.





[jira] [Updated] (HDFS-7016) Implement DataTransferProtocol and Inputstream for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7016:
---
Summary: Implement DataTransferProtocol and Inputstream for libhdfs3  (was: 
Implement DataTransferProtocol and Inputstream)

 Implement DataTransferProtocol and Inputstream for libhdfs3
 ---

 Key: HDFS-7016
 URL: https://issues.apache.org/jira/browse/HDFS-7016
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-7016.patch


 Implement DataTransferProtocol.
 Implement RemoteBlockReader.
 Implement LocalBlockReader (based on a client-datanode RPC call; it cannot 
 work with security-enabled HDFS).
 Implement the InputStream C++ interface.





[jira] [Updated] (HDFS-7015) Implement C++ interface for file system for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7015:
---
Summary: Implement C++ interface for file system for libhdfs3  (was: 
Implement C++ interface for file system)

 Implement C++ interface for file system for libhdfs3
 

 Key: HDFS-7015
 URL: https://issues.apache.org/jira/browse/HDFS-7015
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-7015.patch


 Implement the C++ interface for file system functions such as connect, 
 deletePath and so on.





[jira] [Created] (HDFS-7018) Implement C interface for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)
Zhanwei Wang created HDFS-7018:
--

 Summary: Implement C interface for libhdfs3
 Key: HDFS-7018
 URL: https://issues.apache.org/jira/browse/HDFS-7018
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhanwei Wang


Implement C interface for libhdfs3





[jira] [Updated] (HDFS-7018) Implement C interface for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7018:
---
Attachment: HDFS-7018.patch

 Implement C interface for libhdfs3
 --

 Key: HDFS-7018
 URL: https://issues.apache.org/jira/browse/HDFS-7018
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-7018.patch


 Implement C interface for libhdfs3





[jira] [Created] (HDFS-7019) Add unit test for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)
Zhanwei Wang created HDFS-7019:
--

 Summary: Add unit test for libhdfs3
 Key: HDFS-7019
 URL: https://issues.apache.org/jira/browse/HDFS-7019
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhanwei Wang


Add unit test for libhdfs3





[jira] [Updated] (HDFS-7019) Add unit test for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7019:
---
Attachment: HDFS-7019.patch

 Add unit test for libhdfs3
 --

 Key: HDFS-7019
 URL: https://issues.apache.org/jira/browse/HDFS-7019
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-7019.patch


 Add unit test for libhdfs3





[jira] [Updated] (HDFS-7020) Add function test for C interface, filesystem InputStream and OutputStream for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7020:
---
Summary: Add function test for C interface, filesystem InputStream and 
OutputStream for libhdfs3  (was: Add function for C interface, filesystem 
InputStream and OutputStream for libhdfs3)

 Add function test for C interface, filesystem InputStream and OutputStream 
 for libhdfs3
 ---

 Key: HDFS-7020
 URL: https://issues.apache.org/jira/browse/HDFS-7020
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang







[jira] [Updated] (HDFS-7020) Add function for C interface, filesystem, InputStream and OutputStream for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7020:
---
Summary: Add function for C interface, filesystem, InputStream and 
OutputStream for libhdfs3  (was: Add function test for C interface, filesystem 
InputStream and OutputStream for libhdfs3)

 Add function for C interface, filesystem, InputStream and OutputStream for 
 libhdfs3
 ---

 Key: HDFS-7020
 URL: https://issues.apache.org/jira/browse/HDFS-7020
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang







[jira] [Created] (HDFS-7020) Add function for C interface, filesystem InputStream and OutputStream for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)
Zhanwei Wang created HDFS-7020:
--

 Summary: Add function for C interface, filesystem InputStream and 
OutputStream for libhdfs3
 Key: HDFS-7020
 URL: https://issues.apache.org/jira/browse/HDFS-7020
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhanwei Wang








[jira] [Updated] (HDFS-7020) Add function for C interface, filesystem, InputStream and OutputStream for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7020:
---
Attachment: HDFS-7020.patch

 Add function for C interface, filesystem, InputStream and OutputStream for 
 libhdfs3
 ---

 Key: HDFS-7020
 URL: https://issues.apache.org/jira/browse/HDFS-7020
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-7020.patch


 Add function for C interface, filesystem, InputStream and OutputStream





[jira] [Updated] (HDFS-7020) Add function for C interface, filesystem, InputStream and OutputStream for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7020:
---
Description: Add function for C interface, filesystem, InputStream and 
OutputStream

 Add function for C interface, filesystem, InputStream and OutputStream for 
 libhdfs3
 ---

 Key: HDFS-7020
 URL: https://issues.apache.org/jira/browse/HDFS-7020
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-7020.patch


 Add function for C interface, filesystem, InputStream and OutputStream





[jira] [Created] (HDFS-7021) Add function test for secure enabled HDFS for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)
Zhanwei Wang created HDFS-7021:
--

 Summary: Add function test for secure enabled HDFS for libhdfs3
 Key: HDFS-7021
 URL: https://issues.apache.org/jira/browse/HDFS-7021
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhanwei Wang


Test Kerberos authentication, delegation token authentication and block token.





[jira] [Updated] (HDFS-7021) Add function test for secure enabled HDFS for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7021:
---
Attachment: HDFS-7021.patch

 Add function test for secure enabled HDFS for libhdfs3
 --

 Key: HDFS-7021
 URL: https://issues.apache.org/jira/browse/HDFS-7021
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-7021.patch


 Test Kerberos authentication, delegation token authentication and block token.





[jira] [Updated] (HDFS-7020) Add function test for C interface, filesystem, InputStream and OutputStream for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7020:
---
Summary: Add function test for C interface, filesystem, InputStream and 
OutputStream for libhdfs3  (was: Add function for C interface, filesystem, 
InputStream and OutputStream for libhdfs3)

 Add function test for C interface, filesystem, InputStream and OutputStream 
 for libhdfs3
 

 Key: HDFS-7020
 URL: https://issues.apache.org/jira/browse/HDFS-7020
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-7020.patch


 Add function for C interface, filesystem, InputStream and OutputStream





[jira] [Updated] (HDFS-7020) Add function test for C interface, filesystem, InputStream and OutputStream for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7020:
---
Description: Add function test for C interface, filesystem, InputStream and 
OutputStream  (was: Add function for C interface, filesystem, InputStream and 
OutputStream)

 Add function test for C interface, filesystem, InputStream and OutputStream 
 for libhdfs3
 

 Key: HDFS-7020
 URL: https://issues.apache.org/jira/browse/HDFS-7020
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-7020.patch


 Add function test for C interface, filesystem, InputStream and OutputStream





[jira] [Created] (HDFS-7022) Remove usage of boost::atomic in libhdfs3 to use old version of boost

2014-09-06 Thread Zhanwei Wang (JIRA)
Zhanwei Wang created HDFS-7022:
--

 Summary: Remove usage of boost::atomic in libhdfs3 to use old 
version of boost
 Key: HDFS-7022
 URL: https://issues.apache.org/jira/browse/HDFS-7022
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhanwei Wang








[jira] [Updated] (HDFS-7022) Remove usage of boost::atomic in libhdfs3 to use old version of boost

2014-09-06 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7022:
---
Description: 
libhdfs3 uses boost to provide some basic C++ facilities when it is built with 
an old C++ compiler.

However, boost::atomic was only added in boost 1.53, and it can cause trouble 
for applications that already use an older version of boost and want to use 
libhdfs3.

Removing the usage of boost::atomic lets libhdfs3 compile against older 
versions of boost.

 Remove usage of boost::atomic in libhdfs3 to use old version of boost
 -

 Key: HDFS-7022
 URL: https://issues.apache.org/jira/browse/HDFS-7022
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang

 libhdfs3 uses boost to provide some basic C++ facilities when it is built with 
 an old C++ compiler.
 However, boost::atomic was only added in boost 1.53, and it can cause trouble 
 for applications that already use an older version of boost and want to use 
 libhdfs3.
 Removing the usage of boost::atomic lets libhdfs3 compile against older 
 versions of boost.





[jira] [Created] (HDFS-7023) use libexpat instead of libxml2 for libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)
Zhanwei Wang created HDFS-7023:
--

 Summary: use libexpat instead of libxml2 for libhdfs3
 Key: HDFS-7023
 URL: https://issues.apache.org/jira/browse/HDFS-7023
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhanwei Wang


As commented in HDFS-6994, libxml2 may have some thread-safety issues.





[jira] [Created] (HDFS-7024) Use protobuf files in Hadoop source tree instead of the copy in libhdfs3

2014-09-06 Thread Zhanwei Wang (JIRA)
Zhanwei Wang created HDFS-7024:
--

 Summary: Use protobuf files in Hadoop source tree instead of the 
copy in libhdfs3
 Key: HDFS-7024
 URL: https://issues.apache.org/jira/browse/HDFS-7024
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Zhanwei Wang


Currently, to make libhdfs3 build outside of the Hadoop source tree, I copied 
some protobuf files into libhdfs3. After merging libhdfs3 into Hadoop, the 
protobuf files in the Hadoop source tree should be used instead of the copies 
in libhdfs3.
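A hypothetical CMake fragment (the variable names and .proto paths are illustrative assumptions, not the actual Hadoop build layout) showing the intended switch:

```cmake
# Before the merge: compile the protobuf copies bundled with libhdfs3.
set(PROTO_DIR ${CMAKE_SOURCE_DIR}/src/proto)

# After the merge: point at the canonical .proto files in the Hadoop source
# tree instead, so the two copies cannot drift apart.
# set(PROTO_DIR ${HADOOP_ROOT}/hadoop-hdfs-project/hadoop-hdfs/src/main/proto)

find_package(Protobuf REQUIRED)
protobuf_generate_cpp(PROTO_SRCS PROTO_HDRS
    ${PROTO_DIR}/ClientNamenodeProtocol.proto
    ${PROTO_DIR}/datatransfer.proto)
```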





[jira] [Commented] (HDFS-6994) libhdfs3 - A native C/C++ HDFS client

2014-09-06 Thread Zhanwei Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124462#comment-14124462
 ] 

Zhanwei Wang commented on HDFS-6994:


Hi [~wheat9]

I have separated the libhdfs3 source code into 12 subtasks and created 3 more 
to fix the issues pointed out by [~cmccabe].

It is difficult to divide the libhdfs3 source code into fully independent 
parts, so some tasks may reference source code in other subtasks. Even so, it 
is much easier to review.

Would you please review the code and give some comments? Thanks in advance.

 libhdfs3 - A native C/C++ HDFS client
 -

 Key: HDFS-6994
 URL: https://issues.apache.org/jira/browse/HDFS-6994
 Project: Hadoop HDFS
  Issue Type: Task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-6994-rpc-8.patch, HDFS-6994.patch


 Hi All
 I just got the permission to open source libhdfs3, which is a native C/C++ 
 HDFS client based on Hadoop RPC protocol and HDFS Data Transfer Protocol.
 libhdfs3 provides the libhdfs-style C interface and a C++ interface. It 
 supports both Hadoop RPC versions 8 and 9, as well as NameNode HA and Kerberos 
 authentication.
 libhdfs3 is currently used by Pivotal's HAWQ.
 I'd like to integrate libhdfs3 into the HDFS source code to benefit others.
 You can find libhdfs3 code from github
 https://github.com/PivotalRD/libhdfs3





[jira] [Commented] (HDFS-6831) Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'

2014-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124470#comment-14124470
 ] 

Hudson commented on HDFS-6831:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1888 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1888/])
HDFS-6831. Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'. 
(Contributed by Xiaoyu Yao) (arp: rev 9e941d9f99168cae01f8d50622a616fc26c196d9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestTools.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'
 ---

 Key: HDFS-6831
 URL: https://issues.apache.org/jira/browse/HDFS-6831
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Xiaoyu Yao
Priority: Minor
  Labels: newbie
 Fix For: 2.6.0

 Attachments: HDFS-6831.0.patch, HDFS-6831.1.patch, HDFS-6831.2.patch, 
 HDFS-6831.3.patch, HDFS-6831.4.patch


 There is an inconsistency between the console outputs of 'hdfs dfsadmin' 
 command and 'hdfs dfsadmin -help' command.
 {code}
 [root@trunk ~]# hdfs dfsadmin
 Usage: java DFSAdmin
 Note: Administrative commands can only be run as the HDFS superuser.
[-report]
[-safemode enter | leave | get | wait]
[-allowSnapshot snapshotDir]
[-disallowSnapshot snapshotDir]
[-saveNamespace]
[-rollEdits]
[-restoreFailedStorage true|false|check]
[-refreshNodes]
[-finalizeUpgrade]
[-rollingUpgrade [query|prepare|finalize]]
[-metasave filename]
[-refreshServiceAcl]
[-refreshUserToGroupsMappings]
[-refreshSuperUserGroupsConfiguration]
[-refreshCallQueue]
[-refresh]
[-printTopology]
[-refreshNamenodes datanodehost:port]
[-deleteBlockPool datanode-host:port blockpoolId [force]]
[-setQuota quota dirname...dirname]
[-clrQuota dirname...dirname]
[-setSpaceQuota quota dirname...dirname]
[-clrSpaceQuota dirname...dirname]
[-setBalancerBandwidth bandwidth in bytes per second]
[-fetchImage local directory]
[-shutdownDatanode datanode_host:ipc_port [upgrade]]
[-getDatanodeInfo datanode_host:ipc_port]
[-help [cmd]]
 {code}
 {code}
 [root@trunk ~]# hdfs dfsadmin -help
 hadoop dfsadmin performs DFS administrative commands.
 The full syntax is: 
 hadoop dfsadmin
   [-report [-live] [-dead] [-decommissioning]]
   [-safemode enter | leave | get | wait]
   [-saveNamespace]
   [-rollEdits]
   [-restoreFailedStorage true|false|check]
   [-refreshNodes]
   [-setQuota quota dirname...dirname]
   [-clrQuota dirname...dirname]
   [-setSpaceQuota quota dirname...dirname]
   [-clrSpaceQuota dirname...dirname]
   [-finalizeUpgrade]
   [-rollingUpgrade [query|prepare|finalize]]
   [-refreshServiceAcl]
   [-refreshUserToGroupsMappings]
   [-refreshSuperUserGroupsConfiguration]
   [-refreshCallQueue]
   [-refresh host:ipc_port key [arg1..argn]
   [-printTopology]
   [-refreshNamenodes datanodehost:port]
   [-deleteBlockPool datanodehost:port blockpoolId [force]]
   [-setBalancerBandwidth bandwidth]
   [-fetchImage local directory]
   [-allowSnapshot snapshotDir]
   [-disallowSnapshot snapshotDir]
   [-shutdownDatanode datanode_host:ipc_port [upgrade]]
   [-getDatanodeInfo datanode_host:ipc_port
   [-help [cmd]
 {code}
 These two outputs should be the same.





[jira] [Commented] (HDFS-6979) hdfs.dll does not produce .pdb files

2014-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124471#comment-14124471
 ] 

Hudson commented on HDFS-6979:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1888 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1888/])
HDFS-6979. hdfs.dll not produce .pdb files. Contributed by Chris Nauroth. 
(cnauroth: rev fab9bc58ec03ea81cd5ce8a8746a4ee588f7bb08)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
HDFS-6979. Fix minor error in CHANGES.txt. Contributed by Chris Nauroth. 
(cnauroth: rev b051327ab6a01774e1dad59e1e547dd16f603789)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 hdfs.dll does not produce .pdb files
 

 Key: HDFS-6979
 URL: https://issues.apache.org/jira/browse/HDFS-6979
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Remus Rusanu
Assignee: Chris Nauroth
Priority: Minor
  Labels: build, cmake, native, windows
 Fix For: 3.0.0, 2.6.0

 Attachments: HDFS-6979.1.patch


 The hdfs.dll build does not produce a retail pdb. For comparison, we do 
 produce pdbs for winutils.exe and hadoop.dll.
 I did not verify whether the cmake project produces a dll with an embedded 
 pdb.





[jira] [Commented] (HDFS-6376) Distcp data between two HA clusters requires another configuration

2014-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124475#comment-14124475
 ] 

Hudson commented on HDFS-6376:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1888 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1888/])
HDFS-6376. Distcp data between two HA clusters requires another configuration. 
Contributed by Dave Marion and Haohui Mai. (jing: rev 
c6107f566ff01e9bfee9052f86f6e5b21d5e89f3)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestGetConf.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockPoolManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/GetConf.java


 Distcp data between two HA clusters requires another configuration
 --

 Key: HDFS-6376
 URL: https://issues.apache.org/jira/browse/HDFS-6376
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, federation, hdfs-client
Affects Versions: 2.2.0, 2.3.0, 2.4.0
 Environment: Hadoop 2.3.0
Reporter: Dave Marion
Assignee: Dave Marion
 Fix For: 3.0.0, 2.6.0

 Attachments: HDFS-6376-2.patch, HDFS-6376-3-branch-2.4.patch, 
 HDFS-6376-4-branch-2.4.patch, HDFS-6376-5-trunk.patch, 
 HDFS-6376-6-trunk.patch, HDFS-6376-7-trunk.patch, HDFS-6376-branch-2.4.patch, 
 HDFS-6376-patch-1.patch, HDFS-6376.000.patch, HDFS-6376.008.patch, 
 HDFS-6376.009.patch, HDFS-6376.010.patch, HDFS-6376.011.patch


 User has to create a third set of configuration files for distcp when 
 transferring data between two HA clusters.
 Consider the scenario in [1]. You cannot put all of the required properties 
 in core-site.xml and hdfs-site.xml for the client to resolve the location of 
 both active namenodes. If you do, then the datanodes from cluster A may join 
 cluster B. I cannot find a configuration option that tells the datanodes to 
 federate blocks for only one of the clusters in the configuration.
 [1] 
 http://mail-archives.apache.org/mod_mbox/hadoop-user/201404.mbox/%3CBAY172-W2133964E0C283968C161DD1520%40phx.gbl%3E
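A sketch of the client-only configuration the description implies (the nameservice IDs and host names below are invented for illustration): the key point is that this third set of files is given only to the distcp client, never to the datanodes.

```xml
<!-- hdfs-site.xml fragment for the distcp client only; illustrative values.
     Datanodes must not load a file listing both nameservices, or the
     datanodes of cluster A may join cluster B. -->
<property>
  <name>dfs.nameservices</name>
  <value>clusterA,clusterB</value>
</property>
<property>
  <name>dfs.ha.namenodes.clusterA</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.clusterA.nn1</name>
  <value>a-nn1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.clusterA.nn2</name>
  <value>a-nn2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.clusterA</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- the same properties repeated for clusterB -->
```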





[jira] [Commented] (HDFS-6986) DistributedFileSystem must get delegation tokens from configured KeyProvider

2014-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124472#comment-14124472
 ] 

Hudson commented on HDFS-6986:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1888 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1888/])
HDFS-6986. DistributedFileSystem must get delegation tokens from configured 
KeyProvider. (zhz via tucu) (tucu: rev 3b35f81603bbfae119762b50bcb46de70a421368)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java


 DistributedFileSystem must get delegation tokens from configured KeyProvider
 

 Key: HDFS-6986
 URL: https://issues.apache.org/jira/browse/HDFS-6986
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: security
Reporter: Alejandro Abdelnur
Assignee: Zhe Zhang
 Fix For: 2.6.0

 Attachments: HDFS-6986-20140905-v2.patch, 
 HDFS-6986-20140905-v3.patch, HDFS-6986-20140905.patch, HDFS-6986.patch


 {{KeyProvider}} via {{KeyProviderDelegationTokenExtension}} provides 
 delegation tokens. {{DistributedFileSystem}} should augment the HDFS 
 delegation tokens with the keyprovider ones so tasks can interact with 
 keyprovider when it is a client/server impl (KMS).





[jira] [Commented] (HDFS-6862) Add missing timeout annotations to tests

2014-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124482#comment-14124482
 ] 

Hudson commented on HDFS-6862:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1888 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1888/])
HDFS-6862. Add missing timeout annotations to tests. (Contributed by Xiaoyu 
Yao) (arp: rev 9609b7303a98c8eff676c5a086b08b1ca9ab777c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestValidateConfigurationSettings.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSServerPorts.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyCheckpoints.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAStateTransitions.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDelegationTokensWithHA.java


 Add missing timeout annotations to tests
 

 Key: HDFS-6862
 URL: https://issues.apache.org/jira/browse/HDFS-6862
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.5.0
Reporter: Arpit Agarwal
Assignee: Xiaoyu Yao
  Labels: newbie
 Fix For: 2.6.0

 Attachments: HDFS-6862.0.patch


 One or more tests in the following classes are missing timeout annotations.
 # org.apache.hadoop.hdfs.server.namenode.TestValidateConfigurationSettings
 # org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints
 # org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
 # org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
 # org.apache.hadoop.hdfs.server.namenode.ha.TestHAMetrics
 # org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
 # org.apache.hadoop.hdfs.TestHDFSServerPorts





[jira] [Commented] (HDFS-6862) Add missing timeout annotations to tests

2014-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124501#comment-14124501
 ] 

Hudson commented on HDFS-6862:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1863 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1863/])
HDFS-6862. Add missing timeout annotations to tests. (Contributed by Xiaoyu 
Yao) (arp: rev 9609b7303a98c8eff676c5a086b08b1ca9ab777c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyCheckpoints.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAStateTransitions.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHDFSServerPorts.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestHAMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestDelegationTokensWithHA.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestValidateConfigurationSettings.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Add missing timeout annotations to tests
 

 Key: HDFS-6862
 URL: https://issues.apache.org/jira/browse/HDFS-6862
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.5.0
Reporter: Arpit Agarwal
Assignee: Xiaoyu Yao
  Labels: newbie
 Fix For: 2.6.0

 Attachments: HDFS-6862.0.patch


 One or more tests in the following classes are missing timeout annotations.
 # org.apache.hadoop.hdfs.server.namenode.TestValidateConfigurationSettings
 # org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints
 # org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
 # org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
 # org.apache.hadoop.hdfs.server.namenode.ha.TestHAMetrics
 # org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
 # org.apache.hadoop.hdfs.TestHDFSServerPorts





[jira] [Commented] (HDFS-6376) Distcp data between two HA clusters requires another configuration

2014-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124494#comment-14124494
 ] 

Hudson commented on HDFS-6376:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1863 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1863/])
HDFS-6376. Distcp data between two HA clusters requires another configuration. 
Contributed by Dave Marion and Haohui Mai. (jing: rev 
c6107f566ff01e9bfee9052f86f6e5b21d5e89f3)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockPoolManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/GetConf.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestGetConf.java


 Distcp data between two HA clusters requires another configuration
 --

 Key: HDFS-6376
 URL: https://issues.apache.org/jira/browse/HDFS-6376
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, federation, hdfs-client
Affects Versions: 2.2.0, 2.3.0, 2.4.0
 Environment: Hadoop 2.3.0
Reporter: Dave Marion
Assignee: Dave Marion
 Fix For: 3.0.0, 2.6.0

 Attachments: HDFS-6376-2.patch, HDFS-6376-3-branch-2.4.patch, 
 HDFS-6376-4-branch-2.4.patch, HDFS-6376-5-trunk.patch, 
 HDFS-6376-6-trunk.patch, HDFS-6376-7-trunk.patch, HDFS-6376-branch-2.4.patch, 
 HDFS-6376-patch-1.patch, HDFS-6376.000.patch, HDFS-6376.008.patch, 
 HDFS-6376.009.patch, HDFS-6376.010.patch, HDFS-6376.011.patch


 User has to create a third set of configuration files for distcp when 
 transferring data between two HA clusters.
 Consider the scenario in [1]. You cannot put all of the required properties 
 in core-site.xml and hdfs-site.xml for the client to resolve the location of 
 both active namenodes. If you do, then the datanodes from cluster A may join 
 cluster B. I cannot find a configuration option that tells the datanodes to 
 federate blocks for only one of the clusters in the configuration.
 [1] 
 http://mail-archives.apache.org/mod_mbox/hadoop-user/201404.mbox/%3CBAY172-W2133964E0C283968C161DD1520%40phx.gbl%3E





[jira] [Commented] (HDFS-6986) DistributedFileSystem must get delegation tokens from configured KeyProvider

2014-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124491#comment-14124491
 ] 

Hudson commented on HDFS-6986:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1863 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1863/])
HDFS-6986. DistributedFileSystem must get delegation tokens from configured 
KeyProvider. (zhz via tucu) (tucu: rev 3b35f81603bbfae119762b50bcb46de70a421368)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java


 DistributedFileSystem must get delegation tokens from configured KeyProvider
 

 Key: HDFS-6986
 URL: https://issues.apache.org/jira/browse/HDFS-6986
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: security
Reporter: Alejandro Abdelnur
Assignee: Zhe Zhang
 Fix For: 2.6.0

 Attachments: HDFS-6986-20140905-v2.patch, 
 HDFS-6986-20140905-v3.patch, HDFS-6986-20140905.patch, HDFS-6986.patch


 {{KeyProvider}} via {{KeyProviderDelegationTokenExtension}} provides 
 delegation tokens. {{DistributedFileSystem}} should augment the HDFS 
 delegation tokens with the keyprovider ones so tasks can interact with 
 keyprovider when it is a client/server impl (KMS).





[jira] [Commented] (HDFS-6831) Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'

2014-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124489#comment-14124489
 ] 

Hudson commented on HDFS-6831:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1863 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1863/])
HDFS-6831. Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'. 
(Contributed by Xiaoyu Yao) (arp: rev 9e941d9f99168cae01f8d50622a616fc26c196d9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestTools.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Inconsistency between 'hdfs dfsadmin' and 'hdfs dfsadmin -help'
 ---

 Key: HDFS-6831
 URL: https://issues.apache.org/jira/browse/HDFS-6831
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Akira AJISAKA
Assignee: Xiaoyu Yao
Priority: Minor
  Labels: newbie
 Fix For: 2.6.0

 Attachments: HDFS-6831.0.patch, HDFS-6831.1.patch, HDFS-6831.2.patch, 
 HDFS-6831.3.patch, HDFS-6831.4.patch


 There is an inconsistency between the console outputs of 'hdfs dfsadmin' 
 command and 'hdfs dfsadmin -help' command.
 {code}
 [root@trunk ~]# hdfs dfsadmin
 Usage: java DFSAdmin
 Note: Administrative commands can only be run as the HDFS superuser.
[-report]
[-safemode enter | leave | get | wait]
[-allowSnapshot snapshotDir]
[-disallowSnapshot snapshotDir]
[-saveNamespace]
[-rollEdits]
[-restoreFailedStorage true|false|check]
[-refreshNodes]
[-finalizeUpgrade]
[-rollingUpgrade [query|prepare|finalize]]
[-metasave filename]
[-refreshServiceAcl]
[-refreshUserToGroupsMappings]
[-refreshSuperUserGroupsConfiguration]
[-refreshCallQueue]
[-refresh]
[-printTopology]
[-refreshNamenodes datanodehost:port]
[-deleteBlockPool datanode-host:port blockpoolId [force]]
[-setQuota quota dirname...dirname]
[-clrQuota dirname...dirname]
[-setSpaceQuota quota dirname...dirname]
[-clrSpaceQuota dirname...dirname]
[-setBalancerBandwidth bandwidth in bytes per second]
[-fetchImage local directory]
[-shutdownDatanode datanode_host:ipc_port [upgrade]]
[-getDatanodeInfo datanode_host:ipc_port]
[-help [cmd]]
 {code}
 {code}
 [root@trunk ~]# hdfs dfsadmin -help
 hadoop dfsadmin performs DFS administrative commands.
 The full syntax is: 
 hadoop dfsadmin
   [-report [-live] [-dead] [-decommissioning]]
   [-safemode enter | leave | get | wait]
   [-saveNamespace]
   [-rollEdits]
   [-restoreFailedStorage true|false|check]
   [-refreshNodes]
   [-setQuota quota dirname...dirname]
   [-clrQuota dirname...dirname]
   [-setSpaceQuota quota dirname...dirname]
   [-clrSpaceQuota dirname...dirname]
   [-finalizeUpgrade]
   [-rollingUpgrade [query|prepare|finalize]]
   [-refreshServiceAcl]
   [-refreshUserToGroupsMappings]
   [-refreshSuperUserGroupsConfiguration]
   [-refreshCallQueue]
   [-refresh host:ipc_port key [arg1..argn]
   [-printTopology]
   [-refreshNamenodes datanodehost:port]
   [-deleteBlockPool datanodehost:port blockpoolId [force]]
   [-setBalancerBandwidth bandwidth]
   [-fetchImage local directory]
   [-allowSnapshot snapshotDir]
   [-disallowSnapshot snapshotDir]
   [-shutdownDatanode datanode_host:ipc_port [upgrade]]
   [-getDatanodeInfo datanode_host:ipc_port
   [-help [cmd]
 {code}
 These two outputs should be the same.





[jira] [Commented] (HDFS-6979) hdfs.dll does not produce .pdb files

2014-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124490#comment-14124490
 ] 

Hudson commented on HDFS-6979:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1863 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1863/])
HDFS-6979. hdfs.dll not produce .pdb files. Contributed by Chris Nauroth. 
(cnauroth: rev fab9bc58ec03ea81cd5ce8a8746a4ee588f7bb08)
* hadoop-hdfs-project/hadoop-hdfs/pom.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
HDFS-6979. Fix minor error in CHANGES.txt. Contributed by Chris Nauroth. 
(cnauroth: rev b051327ab6a01774e1dad59e1e547dd16f603789)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 hdfs.dll does not produce .pdb files
 

 Key: HDFS-6979
 URL: https://issues.apache.org/jira/browse/HDFS-6979
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Remus Rusanu
Assignee: Chris Nauroth
Priority: Minor
  Labels: build, cmake, native, windows
 Fix For: 3.0.0, 2.6.0

 Attachments: HDFS-6979.1.patch


 The hdfs.dll build does not produce a retail pdb. For comparison, we do 
 produce pdbs for winutils.exe and hadoop.dll.
 I did not verify whether the cmake project produces a dll with an embedded 
 pdb.





[jira] [Commented] (HDFS-6776) distcp from insecure cluster (source) to secure cluster (destination) doesn't work via webhdfs

2014-09-06 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124537#comment-14124537
 ] 

Yongjun Zhang commented on HDFS-6776:
-

Hello [~wheat9],

Accessing an insecure cluster via webhdfs from a secure cluster is broken 
today. In my earlier comments I claimed this is not the right webhdfs contract 
but rather a bug. The problem is not limited to distcp; it affects other 
applications too. Fixing webhdfs itself has the advantage that we don't have 
to fix every application. I have argued this along the way and would like to 
emphasize it.

I'd like to give an example: if you issue hadoop fs -lsr 
webhdfs://insecurecluster from the secure cluster side, you would see it fail 
the same way. Fixing distcp as you proposed would not solve this, but my 
proposed solution does.

Thanks.


 distcp from insecure cluster (source) to secure cluster (destination) doesn't 
 work via webhdfs
 --

 Key: HDFS-6776
 URL: https://issues.apache.org/jira/browse/HDFS-6776
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.3.0, 2.5.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HDFS-6776.001.patch, HDFS-6776.002.patch, 
 HDFS-6776.003.patch, HDFS-6776.004.patch, HDFS-6776.004.patch, 
 HDFS-6776.005.patch, HDFS-6776.006.NullToken.patch, 
 HDFS-6776.006.NullToken.patch, HDFS-6776.007.patch, HDFS-6776.008.patch, 
 HDFS-6776.009.patch, HDFS-6776.010.patch, HDFS-6776.011.patch, 
 dummy-token-proxy.js


 Issuing distcp command at the secure cluster side, trying to copy stuff from 
 insecure cluster to secure cluster, and see the following problem:
 {code}
 [hadoopuser@yjc5u-1 ~]$ hadoop distcp webhdfs://insure-cluster:port/tmp 
 hdfs://sure-cluster:8020/tmp/tmptgt
 14/07/30 20:06:19 INFO tools.DistCp: Input Options: 
 DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
 ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
 copyStrategy='uniformsize', sourceFileListing=null, 
 sourcePaths=[webhdfs://insecure-cluster:port/tmp], 
 targetPath=hdfs://secure-cluster:8020/tmp/tmptgt, targetPathExists=true}
 14/07/30 20:06:19 INFO client.RMProxy: Connecting to ResourceManager at 
 secure-clister:8032
 14/07/30 20:06:20 WARN ssl.FileBasedKeyStoresFactory: The property 
 'ssl.client.truststore.location' has not been set, no TrustStore will be 
 loaded
 14/07/30 20:06:20 WARN security.UserGroupInformation: 
 PriviledgedActionException as:hadoopu...@xyz.com (auth:KERBEROS) 
 cause:java.io.IOException: Failed to get the token for hadoopuser, 
 user=hadoopuser
 14/07/30 20:06:20 WARN security.UserGroupInformation: 
 PriviledgedActionException as:hadoopu...@xyz.com (auth:KERBEROS) 
 cause:java.io.IOException: Failed to get the token for hadoopuser, 
 user=hadoopuser
 14/07/30 20:06:20 ERROR tools.DistCp: Exception encountered 
 java.io.IOException: Failed to get the token for hadoopuser, user=hadoopuser
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
   at 
 org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
   at 
 org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:365)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$600(WebHdfsFileSystem.java:84)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:618)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:584)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:438)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:466)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:462)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:1132)
   at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:218)
   at 
 

[jira] [Commented] (HDFS-6994) libhdfs3 - A native C/C++ HDFS client

2014-09-06 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124575#comment-14124575
 ] 

Haohui Mai commented on HDFS-6994:
--

bq. About the boost, you are right. Actually boost is not required if the 
C++ compiler is not too old. I also think using boost can make libhdfs3 
useful for as many people as possible who still use old C++ compilers. But, 
yes, I should not require a very new boost version; that can be improved 
along with the other dependency issues.

I think that the main goal is to have a clean-slate, modern, and 
easy-to-maintain library. Modern C++ language features are great leverage 
toward that goal.

I might be over-optimistic, but personally I don't think old compilers are 
that big of an issue -- CentOS 7 already ships gcc 4.8 by default, and it is 
quite easy to install clang on build machines. Clang is production-ready for 
C++11.

 libhdfs3 - A native C/C++ HDFS client
 -

 Key: HDFS-6994
 URL: https://issues.apache.org/jira/browse/HDFS-6994
 Project: Hadoop HDFS
  Issue Type: Task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-6994-rpc-8.patch, HDFS-6994.patch


 Hi All
 I just got the permission to open source libhdfs3, which is a native C/C++ 
 HDFS client based on the Hadoop RPC protocol and the HDFS Data Transfer 
 Protocol.
 libhdfs3 provides the libhdfs-style C interface and a C++ interface. It 
 supports both Hadoop RPC versions 8 and 9, Namenode HA, and Kerberos 
 authentication.
 libhdfs3 is currently used by Pivotal's HAWQ.
 I'd like to integrate libhdfs3 into the HDFS source code to benefit others.
 You can find the libhdfs3 code on GitHub:
 https://github.com/PivotalRD/libhdfs3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6994) libhdfs3 - A native C/C++ HDFS client

2014-09-06 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124579#comment-14124579
 ] 

Haohui Mai commented on HDFS-6994:
--

bq. This adds operational complexity for non-Java clients that just want to 
integrate with HDFS.

Agreed. I think the use case for this library is slightly different from 
libhdfs / libndfs. It is quite beneficial to have a fully native client with 
zero dependency on the Java side. One use case is to use the library to 
verify the wire compatibility of the HDFS protocols.

Full functional parity is great, but I don't think it should be the main focus 
for libhdfs3.

 libhdfs3 - A native C/C++ HDFS client
 -

 Key: HDFS-6994
 URL: https://issues.apache.org/jira/browse/HDFS-6994
 Project: Hadoop HDFS
  Issue Type: Task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-6994-rpc-8.patch, HDFS-6994.patch


 Hi All
 I just got the permission to open source libhdfs3, which is a native C/C++ 
 HDFS client based on the Hadoop RPC protocol and the HDFS Data Transfer 
 Protocol.
 libhdfs3 provides the libhdfs-style C interface and a C++ interface. It 
 supports both Hadoop RPC versions 8 and 9, Namenode HA, and Kerberos 
 authentication.
 libhdfs3 is currently used by Pivotal's HAWQ.
 I'd like to integrate libhdfs3 into the HDFS source code to benefit others.
 You can find the libhdfs3 code on GitHub:
 https://github.com/PivotalRD/libhdfs3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6940) Initial refactoring to allow ConsensusNode implementation

2014-09-06 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-6940:
-
Fix Version/s: (was: 2.6.0)

 Initial refactoring to allow ConsensusNode implementation
 -

 Key: HDFS-6940
 URL: https://issues.apache.org/jira/browse/HDFS-6940
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.6-alpha, 2.5.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Attachments: HDFS-6940.patch


 Minor refactoring of FSNamesystem to open private methods that are needed for 
 CNode implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6940) Initial refactoring to allow ConsensusNode implementation

2014-09-06 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-6940:
-
Affects Version/s: (was: 3.0.0)
   2.0.6-alpha
   2.5.0

 Initial refactoring to allow ConsensusNode implementation
 -

 Key: HDFS-6940
 URL: https://issues.apache.org/jira/browse/HDFS-6940
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.6-alpha, 2.5.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Attachments: HDFS-6940.patch


 Minor refactoring of FSNamesystem to open private methods that are needed for 
 CNode implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6940) Initial refactoring to allow ConsensusNode implementation

2014-09-06 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-6940:
-
Fix Version/s: 2.6.0

 Initial refactoring to allow ConsensusNode implementation
 -

 Key: HDFS-6940
 URL: https://issues.apache.org/jira/browse/HDFS-6940
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.6-alpha, 2.5.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Attachments: HDFS-6940.patch


 Minor refactoring of FSNamesystem to open private methods that are needed for 
 CNode implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6940) Initial refactoring to allow ConsensusNode implementation

2014-09-06 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124589#comment-14124589
 ] 

Konstantin Boudnik commented on HDFS-6940:
--

Looks good. +1

 Initial refactoring to allow ConsensusNode implementation
 -

 Key: HDFS-6940
 URL: https://issues.apache.org/jira/browse/HDFS-6940
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.6-alpha, 2.5.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Attachments: HDFS-6940.patch


 Minor refactoring of FSNamesystem to open private methods that are needed for 
 CNode implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6940) Initial refactoring to allow ConsensusNode implementation

2014-09-06 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HDFS-6940:
-
Target Version/s: 2.6.0  (was: 3.0.0)

 Initial refactoring to allow ConsensusNode implementation
 -

 Key: HDFS-6940
 URL: https://issues.apache.org/jira/browse/HDFS-6940
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.6-alpha, 2.5.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Attachments: HDFS-6940.patch


 Minor refactoring of FSNamesystem to open private methods that are needed for 
 CNode implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6981) DN upgrade with layout version change should not use trash

2014-09-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124602#comment-14124602
 ] 

Arpit Agarwal commented on HDFS-6981:
-

Latest test failures are unrelated to the patch.

 DN upgrade with layout version change should not use trash
 --

 Key: HDFS-6981
 URL: https://issues.apache.org/jira/browse/HDFS-6981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: James Thomas
Assignee: Arpit Agarwal
 Attachments: HDFS-6981.01.patch, HDFS-6981.02.patch, 
 HDFS-6981.03.patch, HDFS-6981.04.patch, HDFS-6981.05.patch, HDFS-6981.06.patch


 Post HDFS-6800, we can encounter the following scenario:
 # We start with DN software version -55 and initiate a rolling upgrade to 
 version -56
 # We delete some blocks, and they are moved to trash
 # We roll back to DN software version -55 using the -rollback flag – since we 
 are running the old code (prior to this patch), we will restore the previous 
 directory but will not delete the trash
 # We append to some of the blocks that were deleted in step 2
 # We then restart a DN that contains blocks that were appended to – since the 
 trash still exists, it will be restored at this point, the appended-to blocks 
 will be overwritten, and we will lose the appended data
 So I think we need to avoid writing anything to the trash directory if we 
 have a previous directory.
 Thanks to [~james.thomas] for reporting this.
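
The five-step failure sequence above can be sketched abstractly; this is a toy model of the accounting, not DataNode code:

```python
# Toy model of the rollback-vs-trash hazard described above; names and
# structure are illustrative, not actual DataNode code.

def restart_dn(current, trash):
    """On restart, the old code restores anything left in trash (the bug)."""
    current.update(trash)
    trash.clear()

current = {"blk_1": "v1"}   # block id -> contents
previous = dict(current)    # 'previous' snapshot taken at upgrade start

trash = {}
# step 2: delete a block during the upgrade -> it is moved to trash
trash["blk_1"] = current.pop("blk_1")

# step 3: rollback restores the 'previous' directory but leaves trash behind
current = dict(previous)

# step 4: the restored block is appended to
current["blk_1"] += "+appended"

# step 5: DN restarts; stale trash overwrites the appended-to block
restart_dn(current, trash)

print(current["blk_1"])  # the appended data is lost: prints "v1"
```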



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6940) Initial refactoring to allow ConsensusNode implementation

2014-09-06 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-6940:
--
   Resolution: Fixed
Fix Version/s: 2.6.0
   Status: Resolved  (was: Patch Available)

I just committed this.

 Initial refactoring to allow ConsensusNode implementation
 -

 Key: HDFS-6940
 URL: https://issues.apache.org/jira/browse/HDFS-6940
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.6-alpha, 2.5.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: 2.6.0

 Attachments: HDFS-6940.patch


 Minor refactoring of FSNamesystem to open private methods that are needed for 
 CNode implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6981) DN upgrade with layout version change should not use trash

2014-09-06 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-6981:

Attachment: (was: HDFS-6981.07.patch)

 DN upgrade with layout version change should not use trash
 --

 Key: HDFS-6981
 URL: https://issues.apache.org/jira/browse/HDFS-6981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: James Thomas
Assignee: Arpit Agarwal
 Attachments: HDFS-6981.01.patch, HDFS-6981.02.patch, 
 HDFS-6981.03.patch, HDFS-6981.04.patch, HDFS-6981.05.patch, HDFS-6981.06.patch


 Post HDFS-6800, we can encounter the following scenario:
 # We start with DN software version -55 and initiate a rolling upgrade to 
 version -56
 # We delete some blocks, and they are moved to trash
 # We roll back to DN software version -55 using the -rollback flag – since we 
 are running the old code (prior to this patch), we will restore the previous 
 directory but will not delete the trash
 # We append to some of the blocks that were deleted in step 2
 # We then restart a DN that contains blocks that were appended to – since the 
 trash still exists, it will be restored at this point, the appended-to blocks 
 will be overwritten, and we will lose the appended data
 So I think we need to avoid writing anything to the trash directory if we 
 have a previous directory.
 Thanks to [~james.thomas] for reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6981) DN upgrade with layout version change should not use trash

2014-09-06 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-6981:

Attachment: HDFS-6981.07.patch

Simplified the test case. Looks like the allocation unit adjustment on ext4 
works differently, so fixed the test to not require it. 

 DN upgrade with layout version change should not use trash
 --

 Key: HDFS-6981
 URL: https://issues.apache.org/jira/browse/HDFS-6981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: James Thomas
Assignee: Arpit Agarwal
 Attachments: HDFS-6981.01.patch, HDFS-6981.02.patch, 
 HDFS-6981.03.patch, HDFS-6981.04.patch, HDFS-6981.05.patch, HDFS-6981.06.patch


 Post HDFS-6800, we can encounter the following scenario:
 # We start with DN software version -55 and initiate a rolling upgrade to 
 version -56
 # We delete some blocks, and they are moved to trash
 # We roll back to DN software version -55 using the -rollback flag – since we 
 are running the old code (prior to this patch), we will restore the previous 
 directory but will not delete the trash
 # We append to some of the blocks that were deleted in step 2
 # We then restart a DN that contains blocks that were appended to – since the 
 trash still exists, it will be restored at this point, the appended-to blocks 
 will be overwritten, and we will lose the appended data
 So I think we need to avoid writing anything to the trash directory if we 
 have a previous directory.
 Thanks to [~james.thomas] for reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6898) DN must reserve space for a full block when an RBW block is created

2014-09-06 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-6898:

Attachment: HDFS-6898.07.patch

Simplified the test case. It looks like the allocation unit adjustment works 
differently on ext4, so I fixed the test not to require it.

 DN must reserve space for a full block when an RBW block is created
 ---

 Key: HDFS-6898
 URL: https://issues.apache.org/jira/browse/HDFS-6898
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.5.0
Reporter: Gopal V
Assignee: Arpit Agarwal
 Attachments: HDFS-6898.01.patch, HDFS-6898.03.patch, 
 HDFS-6898.04.patch, HDFS-6898.05.patch, HDFS-6898.06.patch, HDFS-6898.07.patch


 DN will successfully create two RBW blocks on the same volume even if the 
 free space is sufficient for just one full block.
 One or both block writers may subsequently get a DiskOutOfSpace exception. 
 This can be avoided by allocating space up front.
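
The up-front allocation amounts to reserving a full block's worth of space when the RBW replica is created and releasing the reservation on finalize. A rough sketch of the accounting (hypothetical names, not the actual FsVolumeImpl code):

```python
# Hypothetical sketch of up-front block-space reservation; illustrates the
# accounting idea only, not the real FsVolumeImpl logic.
BLOCK_SIZE = 128 * 1024 * 1024

class Volume:
    def __init__(self, free_bytes):
        self.free = free_bytes
        self.reserved = 0

    def available(self):
        return self.free - self.reserved

    def create_rbw(self, block_size=BLOCK_SIZE):
        # Reserve the full block size up front, so a second writer cannot
        # claim space this replica may still need.
        if self.available() < block_size:
            raise IOError("out of space for a full block")
        self.reserved += block_size
        return block_size

    def finalize(self, reserved, actual_len):
        # Release the reservation; only the bytes actually written stay used.
        self.reserved -= reserved
        self.free -= actual_len

# A volume with room for exactly one full block:
vol = Volume(free_bytes=BLOCK_SIZE)
vol.create_rbw()          # first writer succeeds
try:
    vol.create_rbw()      # second writer is refused immediately, not mid-write
except IOError as e:
    print("second RBW rejected:", e)
```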



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HDFS-6981) DN upgrade with layout version change should not use trash

2014-09-06 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-6981:

Comment: was deleted

(was: Simplified the test case. Looks like the allocation unit adjustment on 
ext4 works differently, so fixed the test to not require it. )

 DN upgrade with layout version change should not use trash
 --

 Key: HDFS-6981
 URL: https://issues.apache.org/jira/browse/HDFS-6981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: James Thomas
Assignee: Arpit Agarwal
 Attachments: HDFS-6981.01.patch, HDFS-6981.02.patch, 
 HDFS-6981.03.patch, HDFS-6981.04.patch, HDFS-6981.05.patch, HDFS-6981.06.patch


 Post HDFS-6800, we can encounter the following scenario:
 # We start with DN software version -55 and initiate a rolling upgrade to 
 version -56
 # We delete some blocks, and they are moved to trash
 # We roll back to DN software version -55 using the -rollback flag – since we 
 are running the old code (prior to this patch), we will restore the previous 
 directory but will not delete the trash
 # We append to some of the blocks that were deleted in step 2
 # We then restart a DN that contains blocks that were appended to – since the 
 trash still exists, it will be restored at this point, the appended-to blocks 
 will be overwritten, and we will lose the appended data
 So I think we need to avoid writing anything to the trash directory if we 
 have a previous directory.
 Thanks to [~james.thomas] for reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6997) Archival Storage: add more tests for data migration and replication

2014-09-06 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-6997:
--
Attachment: h6997_20140907.patch

h6997_20140907.patch: fixes some bugs.

Still having a problem that Mover may not terminate in some cases.  Will fix it 
separately.

 Archival Storage: add more tests for data migration and replication
 --

 Key: HDFS-6997
 URL: https://issues.apache.org/jira/browse/HDFS-6997
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer, namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Attachments: h6997_20140905.patch, h6997_20140905b.patch, 
 h6997_20140907.patch


 This JIRA is to add more tests to check whether the data migration tool 
 moves replicas correctly and whether the replication monitor replicates 
 blocks correctly when storage policies are considered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6940) Initial refactoring to allow ConsensusNode implementation

2014-09-06 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124655#comment-14124655
 ] 

Aaron T. Myers commented on HDFS-6940:
--

bq. I have a great idea Aaron T. Myers - let's in fact do everything as 
plugins! For example, the 2.4.0 release introduced 3 backward-incompatible 
fixes that broke at least two huge components downstream. In fact, we are 
catching stuff like that in Bigtop all the time. I am sure it could've been 
avoided if only we had better plugin contracts for everything that depends 
on the Hadoop bits.

Being sarcastic is not at all helpful for this discussion.

Guys, I'm really not happy that this was committed to trunk/branch-2 without 
waiting for the discussion on that subject to finish. That's not the way 
consensus development works. I don't see why making these changes first on 
trunk is going to make merging any easier. Can you explain how that would be 
the case?

I'm trying to respectfully work with you guys here so this solution can get 
implemented in a way that's amenable to everyone, and I really hate to throw 
around -1's or revert things, but you're not making it easy. Please consider 
reverting this from trunk/branch-2 yourself and just make the changes on the 
branch.

 Initial refactoring to allow ConsensusNode implementation
 -

 Key: HDFS-6940
 URL: https://issues.apache.org/jira/browse/HDFS-6940
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.6-alpha, 2.5.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: 2.6.0

 Attachments: HDFS-6940.patch


 Minor refactoring of FSNamesystem to open private methods that are needed for 
 CNode implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6940) Initial refactoring to allow ConsensusNode implementation

2014-09-06 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124658#comment-14124658
 ] 

Konstantin Boudnik commented on HDFS-6940:
--

bq. Being sarcastic is not at all helpful for this discussion.
Let's stick to the technical matter on the JIRA. If you feel an urge to lecture 
me on my moral qualities - send me a personal email.


 Initial refactoring to allow ConsensusNode implementation
 -

 Key: HDFS-6940
 URL: https://issues.apache.org/jira/browse/HDFS-6940
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.6-alpha, 2.5.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: 2.6.0

 Attachments: HDFS-6940.patch


 Minor refactoring of FSNamesystem to open private methods that are needed for 
 CNode implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6940) Initial refactoring to allow ConsensusNode implementation

2014-09-06 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124661#comment-14124661
 ] 

Aaron T. Myers commented on HDFS-6940:
--

Konst, I think you're taking this personally when it was not intended as such. 
Commenting on communication style is not lecturing you on a moral quality.

Can you please explain why doing this refactor, which is really mostly just 
opening up visibility of a host of methods, will make merging from/to trunk 
easier?

 Initial refactoring to allow ConsensusNode implementation
 -

 Key: HDFS-6940
 URL: https://issues.apache.org/jira/browse/HDFS-6940
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.6-alpha, 2.5.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: 2.6.0

 Attachments: HDFS-6940.patch


 Minor refactoring of FSNamesystem to open private methods that are needed for 
 CNode implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6940) Initial refactoring to allow ConsensusNode implementation

2014-09-06 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124665#comment-14124665
 ] 

Konstantin Boudnik commented on HDFS-6940:
--

bq. Konst, I think you're taking this personally when it was not intended as 
such. Commenting on communication style is not lecturing you on a moral quality.
I am not taking this personally. I simply offered to look at a wider 
application of the plugin methodology to provide certain guarantees for ABI 
(and API) compatibility. You called this sarcasm. Hence, I am simply asking 
that we restrict the exchange to the technical merits of the matter, without 
passing subjective judgment on my communication style.

 Initial refactoring to allow ConsensusNode implementation
 -

 Key: HDFS-6940
 URL: https://issues.apache.org/jira/browse/HDFS-6940
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.6-alpha, 2.5.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: 2.6.0

 Attachments: HDFS-6940.patch


 Minor refactoring of FSNamesystem to open private methods that are needed for 
 CNode implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6940) Initial refactoring to allow ConsensusNode implementation

2014-09-06 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124673#comment-14124673
 ] 

Aaron T. Myers commented on HDFS-6940:
--

We can drop the discussion of respectful communication style from here if you 
want.

Can you please answer the following question:

{quote}
Can you please explain why doing this refactor, which is really mostly just 
opening up visibility of a host of methods, will make merging from/to trunk 
easier?
{quote}

 Initial refactoring to allow ConsensusNode implementation
 -

 Key: HDFS-6940
 URL: https://issues.apache.org/jira/browse/HDFS-6940
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.6-alpha, 2.5.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: 2.6.0

 Attachments: HDFS-6940.patch


 Minor refactoring of FSNamesystem to open private methods that are needed for 
 CNode implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6940) Initial refactoring to allow ConsensusNode implementation

2014-09-06 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124674#comment-14124674
 ] 

Konstantin Boudnik commented on HDFS-6940:
--

bq. We can drop the discussion of respectful communication style from here if 
you want.
Again, a subjective judgment. Please move this into the personal email if you 
have something to express personally.

 Initial refactoring to allow ConsensusNode implementation
 -

 Key: HDFS-6940
 URL: https://issues.apache.org/jira/browse/HDFS-6940
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: 2.0.6-alpha, 2.5.0
Reporter: Konstantin Shvachko
Assignee: Konstantin Shvachko
 Fix For: 2.6.0

 Attachments: HDFS-6940.patch


 Minor refactoring of FSNamesystem to open private methods that are needed for 
 CNode implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6997) Archival Storage: add more tests for data migration and replication

2014-09-06 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124691#comment-14124691
 ] 

Jing Zhao commented on HDFS-6997:
-

bq. Will fix it separately.

Yeah, we can fix it in a separate jira. +1 for the latest patch.

 Archival Storage: add more tests for data migration and replication
 --

 Key: HDFS-6997
 URL: https://issues.apache.org/jira/browse/HDFS-6997
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer, namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Attachments: h6997_20140905.patch, h6997_20140905b.patch, 
 h6997_20140907.patch


 This JIRA is to add more tests to check whether the data migration tool 
 moves replicas correctly and whether the replication monitor replicates 
 blocks correctly when storage policies are considered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7025) HDFS Credential Provider related Unit Test Failure

2014-09-06 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-7025:


 Summary: HDFS Credential Provider related  Unit Test Failure
 Key: HDFS-7025
 URL: https://issues.apache.org/jira/browse/HDFS-7025
 Project: Hadoop HDFS
  Issue Type: Test
  Components: encryption
Affects Versions: 2.4.1
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Critical


Reported by: Xiaomara and investigated by [~cnauroth].

The credential provider related unit tests failed on Windows. The tests try to 
set up a URI by taking the build test directory and concatenating it with other 
strings containing the rest of the URI format, i.e.:

{code}
  public void testFactory() throws Exception {
    Configuration conf = new Configuration();
    conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH,
        UserProvider.SCHEME_NAME + ":///," +
        JavaKeyStoreProvider.SCHEME_NAME + "://file" + tmpDir +
        "/test.jks");
{code}

This logic is incorrect on Windows, because the file path separator will be 
'\', which violates URI syntax; a backslash is not permitted in a URI.

The proper fix is to always do path/URI construction through the 
org.apache.hadoop.fs.Path class, specifically using the constructors that take 
explicit parent and child arguments.
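
The breakage and the shape of the fix can be illustrated outside Java. This is a rough Python analogy only; the real fix goes through org.apache.hadoop.fs.Path(parent, child), not this code:

```python
# Rough analogy of why naive string concatenation breaks URI construction on
# Windows, and why a path-aware join fixes it. Illustrative only; the actual
# fix uses org.apache.hadoop.fs.Path with explicit parent/child arguments.
from pathlib import PureWindowsPath

tmp_dir = r"C:\hadoop\build\test\data"   # a typical Windows build directory

# Naive concatenation leaks backslashes into the URI (invalid):
bad = "jceks://file" + tmp_dir + "/test.jks"

# A path-aware join normalizes the separators first:
good = "jceks://file/" + PureWindowsPath(tmp_dir).as_posix() + "/test.jks"

print(bad)   # jceks://fileC:\hadoop\build\test\data/test.jks
print(good)  # jceks://file/C:/hadoop/build/test/data/test.jks
```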

The affected unit tests are:

{code}
* TestCryptoAdminCLI
* TestDFSUtil
* TestEncryptionZones
* TestReservedRawPaths
{code}





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6898) DN must reserve space for a full block when an RBW block is created

2014-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124694#comment-14124694
 ] 

Hadoop QA commented on HDFS-6898:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12667038/HDFS-6898.07.patch
  against trunk revision 88209ce.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7930//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7930//console

This message is automatically generated.

 DN must reserve space for a full block when an RBW block is created
 ---

 Key: HDFS-6898
 URL: https://issues.apache.org/jira/browse/HDFS-6898
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.5.0
Reporter: Gopal V
Assignee: Arpit Agarwal
 Attachments: HDFS-6898.01.patch, HDFS-6898.03.patch, 
 HDFS-6898.04.patch, HDFS-6898.05.patch, HDFS-6898.06.patch, HDFS-6898.07.patch


 DN will successfully create two RBW blocks on the same volume even if the 
 free space is sufficient for just one full block.
 One or both block writers may subsequently get a DiskOutOfSpace exception. 
 This can be avoided by allocating space up front.
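The up-front allocation idea can be sketched as a per-volume reservation counter (a minimal illustration with made-up names, not the actual DataNode code): reserve a full block's worth of space when an RBW replica is created, reject the writer if that would overcommit the volume, and release the unused remainder when the replica is finalized.

```java
import java.util.concurrent.atomic.AtomicLong;

class VolumeSketch {
    private final long capacity;
    private final AtomicLong reservedForRbw = new AtomicLong(0);

    VolumeSketch(long capacity) { this.capacity = capacity; }

    /** Try to reserve blockSize bytes; false means the write would overcommit. */
    boolean tryReserve(long blockSize) {
        while (true) {
            long cur = reservedForRbw.get();
            if (cur + blockSize > capacity) {
                return false;  // a second RBW block must not be admitted
            }
            // CAS loop keeps the reservation atomic under concurrent writers.
            if (reservedForRbw.compareAndSet(cur, cur + blockSize)) {
                return true;
            }
        }
    }

    /** Release bytes that were reserved but not written, on finalize/abort. */
    void release(long bytes) { reservedForRbw.addAndGet(-bytes); }

    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024;
        VolumeSketch v = new VolumeSketch(blockSize);  // room for one full block
        System.out.println(v.tryReserve(blockSize));   // true
        System.out.println(v.tryReserve(blockSize));   // false: already reserved
    }
}
```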





[jira] [Updated] (HDFS-7025) HDFS Credential Provider related Unit Test Failure

2014-09-06 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7025:
-
Attachment: HDFS-7025.0.patch

 HDFS Credential Provider related  Unit Test Failure
 ---

 Key: HDFS-7025
 URL: https://issues.apache.org/jira/browse/HDFS-7025
 Project: Hadoop HDFS
  Issue Type: Test
  Components: encryption
Affects Versions: 2.4.1
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Critical
 Attachments: HDFS-7025.0.patch


 Reported by: Xiaomara and investigated by [~cnauroth].
 The credential provider related unit tests failed on Windows. The tests try 
 to set up a URI by taking the build test directory and concatenating it with 
 other strings containing the rest of the URI format, i.e.:
 {code}
   public void testFactory() throws Exception {
     Configuration conf = new Configuration();
     conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH,
         UserProvider.SCHEME_NAME + ":///," +
         JavaKeyStoreProvider.SCHEME_NAME + "://file" + tmpDir +
         "/test.jks");
 {code}
 This logic is incorrect on Windows, because the file path separator will be 
 '\', which violates URI syntax: the backslash character is not permitted in a 
 URI.
 The proper fix is to always do path/URI construction through the 
 org.apache.hadoop.fs.Path class, specifically using the constructors that 
 take explicit parent and child arguments.
 The affected unit tests are:
 {code}
 * TestCryptoAdminCLI
 * TestDFSUtil
 * TestEncryptionZones
 * TestReservedRawPaths
 {code}





[jira] [Resolved] (HDFS-6997) Archival Storage: add more tests for data migration and replication

2014-09-06 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HDFS-6997.
---
   Resolution: Fixed
Fix Version/s: Archival Storage (HDFS-6584)
 Hadoop Flags: Reviewed

Thanks Jing for reviewing the patch.

I have committed this.

 Archival Storage: add more tests for data migration and replication
 --

 Key: HDFS-6997
 URL: https://issues.apache.org/jira/browse/HDFS-6997
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer, namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor
 Fix For: Archival Storage (HDFS-6584)

 Attachments: h6997_20140905.patch, h6997_20140905b.patch, 
 h6997_20140907.patch


 This JIRA is to add more tests to check if the data migration tool could move 
 the replicas correctly and if the replication monitor could replicate blocks 
 correctly when storage policy is considered.





[jira] [Commented] (HDFS-6981) DN upgrade with layout version change should not use trash

2014-09-06 Thread James Thomas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124731#comment-14124731
 ] 

James Thomas commented on HDFS-6981:


{code}
+      if (!storagesWithRollingUpgradeMarker.contains(bpRoot.toString()) &&
+          !markerFile.exists()) {
+        LOG.info("Created " + markerFile);
+        markerFile.createNewFile();
+        storagesWithRollingUpgradeMarker.add(bpRoot.toString());
+        storagesWithoutRollingUpgradeMarker.remove(bpRoot.toString());
+      }
{code}

could be 

{code}
+      if (!storagesWithRollingUpgradeMarker.contains(bpRoot.toString())) {
+        if (!markerFile.exists()) {
+          LOG.info("Created " + markerFile);
+          markerFile.createNewFile();
+          storagesWithRollingUpgradeMarker.add(bpRoot.toString());
+          storagesWithoutRollingUpgradeMarker.remove(bpRoot.toString());
+        } else {
+          storagesWithRollingUpgradeMarker.add(bpRoot.toString());
+        }
+      }
{code}

and similarly for {{clearRollingUpgradeMarkers}}. These changes ensure that the 
cache is in sync with the filesystem state and reduce the number of filesystem 
operations.

It also seems to me like the in-memory cache could just be two volatile 
booleans (e.g. {{storagesHaveRollingUpgradeMarker}} and 
{{storagesDoNotHaveRollingUpgradeMarker}}) rather than two sets. Could the set 
of storages possibly change during the rolling upgrade?

Otherwise things look good. Tests are solid.

 DN upgrade with layout version change should not use trash
 --

 Key: HDFS-6981
 URL: https://issues.apache.org/jira/browse/HDFS-6981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: James Thomas
Assignee: Arpit Agarwal
 Attachments: HDFS-6981.01.patch, HDFS-6981.02.patch, 
 HDFS-6981.03.patch, HDFS-6981.04.patch, HDFS-6981.05.patch, HDFS-6981.06.patch


 Post HDFS-6800, we can encounter the following scenario:
 # We start with DN software version -55 and initiate a rolling upgrade to 
 version -56
 # We delete some blocks, and they are moved to trash
 # We roll back to DN software version -55 using the -rollback flag – since we 
 are running the old code (prior to this patch), we will restore the previous 
 directory but will not delete the trash
 # We append to some of the blocks that were deleted in step 2
 # We then restart a DN that contains blocks that were appended to – since the 
 trash still exists, it will be restored at this point, the appended-to blocks 
 will be overwritten, and we will lose the appended data
 So I think we need to avoid writing anything to the trash directory if we 
 have a previous directory.
 Thanks to [~james.thomas] for reporting this.





[jira] [Updated] (HDFS-7025) HDFS Credential Provider related Unit Test Failure

2014-09-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-7025:

Status: Patch Available  (was: Open)

 HDFS Credential Provider related  Unit Test Failure
 ---

 Key: HDFS-7025
 URL: https://issues.apache.org/jira/browse/HDFS-7025
 Project: Hadoop HDFS
  Issue Type: Test
  Components: encryption
Affects Versions: 2.4.1
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Critical
 Attachments: HDFS-7025.0.patch


 Reported by: Xiaomara and investigated by [~cnauroth].
 The credential provider related unit tests failed on Windows. The tests try 
 to set up a URI by taking the build test directory and concatenating it with 
 other strings containing the rest of the URI format, i.e.:
 {code}
   public void testFactory() throws Exception {
     Configuration conf = new Configuration();
     conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH,
         UserProvider.SCHEME_NAME + ":///," +
         JavaKeyStoreProvider.SCHEME_NAME + "://file" + tmpDir +
         "/test.jks");
 {code}
 This logic is incorrect on Windows, because the file path separator will be 
 '\', which violates URI syntax: the backslash character is not permitted in a 
 URI.
 The proper fix is to always do path/URI construction through the 
 org.apache.hadoop.fs.Path class, specifically using the constructors that 
 take explicit parent and child arguments.
 The affected unit tests are:
 {code}
 * TestCryptoAdminCLI
 * TestDFSUtil
 * TestEncryptionZones
 * TestReservedRawPaths
 {code}





[jira] [Commented] (HDFS-6898) DN must reserve space for a full block when an RBW block is created

2014-09-06 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124746#comment-14124746
 ] 

Chris Nauroth commented on HDFS-6898:
-

+1 for patch v7 too.  The failure in {{TestPipelinesFailover}} is unrelated and 
tracked elsewhere.  Thanks again, Arpit.

 DN must reserve space for a full block when an RBW block is created
 ---

 Key: HDFS-6898
 URL: https://issues.apache.org/jira/browse/HDFS-6898
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.5.0
Reporter: Gopal V
Assignee: Arpit Agarwal
 Attachments: HDFS-6898.01.patch, HDFS-6898.03.patch, 
 HDFS-6898.04.patch, HDFS-6898.05.patch, HDFS-6898.06.patch, HDFS-6898.07.patch


 DN will successfully create two RBW blocks on the same volume even if the 
 free space is sufficient for just one full block.
 One or both block writers may subsequently get a DiskOutOfSpace exception. 
 This can be avoided by allocating space up front.





[jira] [Updated] (HDFS-6981) DN upgrade with layout version change should not use trash

2014-09-06 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-6981:

Attachment: HDFS-6981.07.patch

Thanks for taking another look. I updated the patch.

Yes, I believe volumes can be added dynamically (HDFS-6740). The cost of the 
sets is trivial, and the states of individual volumes are not coupled.


 DN upgrade with layout version change should not use trash
 --

 Key: HDFS-6981
 URL: https://issues.apache.org/jira/browse/HDFS-6981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: James Thomas
Assignee: Arpit Agarwal
 Attachments: HDFS-6981.01.patch, HDFS-6981.02.patch, 
 HDFS-6981.03.patch, HDFS-6981.04.patch, HDFS-6981.05.patch, 
 HDFS-6981.06.patch, HDFS-6981.07.patch


 Post HDFS-6800, we can encounter the following scenario:
 # We start with DN software version -55 and initiate a rolling upgrade to 
 version -56
 # We delete some blocks, and they are moved to trash
 # We roll back to DN software version -55 using the -rollback flag – since we 
 are running the old code (prior to this patch), we will restore the previous 
 directory but will not delete the trash
 # We append to some of the blocks that were deleted in step 2
 # We then restart a DN that contains blocks that were appended to – since the 
 trash still exists, it will be restored at this point, the appended-to blocks 
 will be overwritten, and we will lose the appended data
 So I think we need to avoid writing anything to the trash directory if we 
 have a previous directory.
 Thanks to [~james.thomas] for reporting this.





[jira] [Updated] (HDFS-6898) DN must reserve space for a full block when an RBW block is created

2014-09-06 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-6898:

  Resolution: Fixed
Target Version/s: 2.6.0
  Status: Resolved  (was: Patch Available)

Thanks for the reviews [~cnauroth]! I committed it to trunk and branch-2.

Colin, if you have any additional feedback, we can address it in a follow-up 
Jira.

 DN must reserve space for a full block when an RBW block is created
 ---

 Key: HDFS-6898
 URL: https://issues.apache.org/jira/browse/HDFS-6898
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.5.0
Reporter: Gopal V
Assignee: Arpit Agarwal
 Attachments: HDFS-6898.01.patch, HDFS-6898.03.patch, 
 HDFS-6898.04.patch, HDFS-6898.05.patch, HDFS-6898.06.patch, HDFS-6898.07.patch


 DN will successfully create two RBW blocks on the same volume even if the 
 free space is sufficient for just one full block.
 One or both block writers may subsequently get a DiskOutOfSpace exception. 
 This can be avoided by allocating space up front.





[jira] [Updated] (HDFS-7025) HDFS Credential Provider related Unit Test Failure

2014-09-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-7025:

Priority: Major  (was: Critical)
Target Version/s: 2.6.0

 HDFS Credential Provider related  Unit Test Failure
 ---

 Key: HDFS-7025
 URL: https://issues.apache.org/jira/browse/HDFS-7025
 Project: Hadoop HDFS
  Issue Type: Test
  Components: encryption
Affects Versions: 2.4.1
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-7025.0.patch


 Reported by: Xiaomara and investigated by [~cnauroth].
 The credential provider related unit tests failed on Windows. The tests try 
 to set up a URI by taking the build test directory and concatenating it with 
 other strings containing the rest of the URI format, i.e.:
 {code}
   public void testFactory() throws Exception {
     Configuration conf = new Configuration();
     conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH,
         UserProvider.SCHEME_NAME + ":///," +
         JavaKeyStoreProvider.SCHEME_NAME + "://file" + tmpDir +
         "/test.jks");
 {code}
 This logic is incorrect on Windows, because the file path separator will be 
 '\', which violates URI syntax: the backslash character is not permitted in a 
 URI.
 The proper fix is to always do path/URI construction through the 
 org.apache.hadoop.fs.Path class, specifically using the constructors that 
 take explicit parent and child arguments.
 The affected unit tests are:
 {code}
 * TestCryptoAdminCLI
 * TestDFSUtil
 * TestEncryptionZones
 * TestReservedRawPaths
 {code}





[jira] [Updated] (HDFS-7025) HDFS Credential Provider related Unit Test Failure

2014-09-06 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7025:
-
Attachment: HDFS-7025.1.patch

Updated the patch with an additional fix from [~cnauroth] that solves the HDFS 
test root path issue in {{TestEncryptionZones}} on Windows.

 HDFS Credential Provider related  Unit Test Failure
 ---

 Key: HDFS-7025
 URL: https://issues.apache.org/jira/browse/HDFS-7025
 Project: Hadoop HDFS
  Issue Type: Test
  Components: encryption
Affects Versions: 2.4.1
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-7025.0.patch, HDFS-7025.1.patch


 Reported by: Xiaomara and investigated by [~cnauroth].
 The credential provider related unit tests failed on Windows. The tests try 
 to set up a URI by taking the build test directory and concatenating it with 
 other strings containing the rest of the URI format, i.e.:
 {code}
   public void testFactory() throws Exception {
     Configuration conf = new Configuration();
     conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH,
         UserProvider.SCHEME_NAME + ":///," +
         JavaKeyStoreProvider.SCHEME_NAME + "://file" + tmpDir +
         "/test.jks");
 {code}
 This logic is incorrect on Windows, because the file path separator will be 
 '\', which violates URI syntax: the backslash character is not permitted in a 
 URI.
 The proper fix is to always do path/URI construction through the 
 org.apache.hadoop.fs.Path class, specifically using the constructors that 
 take explicit parent and child arguments.
 The affected unit tests are:
 {code}
 * TestCryptoAdminCLI
 * TestDFSUtil
 * TestEncryptionZones
 * TestReservedRawPaths
 {code}





[jira] [Commented] (HDFS-7025) HDFS Credential Provider related Unit Test Failure

2014-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124782#comment-14124782
 ] 

Hadoop QA commented on HDFS-7025:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12667051/HDFS-7025.0.patch
  against trunk revision 88209ce.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7931//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7931//console

This message is automatically generated.

 HDFS Credential Provider related  Unit Test Failure
 ---

 Key: HDFS-7025
 URL: https://issues.apache.org/jira/browse/HDFS-7025
 Project: Hadoop HDFS
  Issue Type: Test
  Components: encryption
Affects Versions: 2.4.1
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-7025.0.patch, HDFS-7025.1.patch


 Reported by: Xiaomara and investigated by [~cnauroth].
 The credential provider related unit tests failed on Windows. The tests try 
 to set up a URI by taking the build test directory and concatenating it with 
 other strings containing the rest of the URI format, i.e.:
 {code}
   public void testFactory() throws Exception {
     Configuration conf = new Configuration();
     conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH,
         UserProvider.SCHEME_NAME + ":///," +
         JavaKeyStoreProvider.SCHEME_NAME + "://file" + tmpDir +
         "/test.jks");
 {code}
 This logic is incorrect on Windows, because the file path separator will be 
 '\', which violates URI syntax: the backslash character is not permitted in a 
 URI.
 The proper fix is to always do path/URI construction through the 
 org.apache.hadoop.fs.Path class, specifically using the constructors that 
 take explicit parent and child arguments.
 The affected unit tests are:
 {code}
 * TestCryptoAdminCLI
 * TestDFSUtil
 * TestEncryptionZones
 * TestReservedRawPaths
 {code}





[jira] [Commented] (HDFS-6981) DN upgrade with layout version change should not use trash

2014-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14124785#comment-14124785
 ] 

Hadoop QA commented on HDFS-6981:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12667061/HDFS-6981.07.patch
  against trunk revision 88209ce.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7932//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7932//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7932//console

This message is automatically generated.

 DN upgrade with layout version change should not use trash
 --

 Key: HDFS-6981
 URL: https://issues.apache.org/jira/browse/HDFS-6981
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: James Thomas
Assignee: Arpit Agarwal
 Attachments: HDFS-6981.01.patch, HDFS-6981.02.patch, 
 HDFS-6981.03.patch, HDFS-6981.04.patch, HDFS-6981.05.patch, 
 HDFS-6981.06.patch, HDFS-6981.07.patch


 Post HDFS-6800, we can encounter the following scenario:
 # We start with DN software version -55 and initiate a rolling upgrade to 
 version -56
 # We delete some blocks, and they are moved to trash
 # We roll back to DN software version -55 using the -rollback flag – since we 
 are running the old code (prior to this patch), we will restore the previous 
 directory but will not delete the trash
 # We append to some of the blocks that were deleted in step 2
 # We then restart a DN that contains blocks that were appended to – since the 
 trash still exists, it will be restored at this point, the appended-to blocks 
 will be overwritten, and we will lose the appended data
 So I think we need to avoid writing anything to the trash directory if we 
 have a previous directory.
 Thanks to [~james.thomas] for reporting this.


