[jira] [Created] (HDFS-4770) need log out an extra info in DFSOutputstream

2013-04-27 Thread Keyao Jin (JIRA)
Keyao Jin created HDFS-4770:
---

 Summary: need log out an extra info in DFSOutputstream
 Key: HDFS-4770
 URL: https://issues.apache.org/jira/browse/HDFS-4770
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Keyao Jin
Priority: Minor




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4770) need log out an extra info in DFSOutputstream

2013-04-27 Thread Keyao Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keyao Jin updated HDFS-4770:


Status: Patch Available  (was: Open)

 need log out an extra info in DFSOutputstream
 -

 Key: HDFS-4770
 URL: https://issues.apache.org/jira/browse/HDFS-4770
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Keyao Jin
Priority: Minor





[jira] [Updated] (HDFS-4770) need log out an extra info in DFSOutputstream

2013-04-27 Thread Keyao Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keyao Jin updated HDFS-4770:


Description: need log out an extra info in DFSOutputstream

 need log out an extra info in DFSOutputstream
 -

 Key: HDFS-4770
 URL: https://issues.apache.org/jira/browse/HDFS-4770
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Keyao Jin
Priority: Minor
 Attachments: HDFS.patch


 need log out an extra info in DFSOutputstream



[jira] [Updated] (HDFS-4770) need log out an extra info in DFSOutputstream

2013-04-27 Thread Keyao Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keyao Jin updated HDFS-4770:


   Environment: need log out an extra info in DFSOutputstream
  Target Version/s: 2.0.2-alpha
 Affects Version/s: 2.0.2-alpha
  Tags: need log out an extra info in DFSOutputstream
 Fix Version/s: 2.0.2-alpha
Labels: DFSOutputstream an extra in info log need out  (was: )
Remaining Estimate: 240h
 Original Estimate: 240h
  Release Note: need log out an extra info in DFSOutputstream

need log out an extra info in DFSOutputstream

 need log out an extra info in DFSOutputstream
 -

 Key: HDFS-4770
 URL: https://issues.apache.org/jira/browse/HDFS-4770
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.0.2-alpha
 Environment: need log out an extra info in DFSOutputstream
Reporter: Keyao Jin
Priority: Minor
  Labels: DFSOutputstream, an, extra, in, info, log, need, out
 Fix For: 2.0.2-alpha

 Attachments: HDFS.patch

   Original Estimate: 240h
  Remaining Estimate: 240h

 need log out an extra info in DFSOutputstream



[jira] [Updated] (HDFS-4770) need log out an extra info in DFSOutputstream

2013-04-27 Thread Keyao Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keyao Jin updated HDFS-4770:


Attachment: HDFS.patch

 need log out an extra info in DFSOutputstream
 -

 Key: HDFS-4770
 URL: https://issues.apache.org/jira/browse/HDFS-4770
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Keyao Jin
Priority: Minor
 Attachments: HDFS.patch






[jira] [Commented] (HDFS-4770) need log out an extra info in DFSOutputstream

2013-04-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643614#comment-13643614
 ] 

Hadoop QA commented on HDFS-4770:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12580825/HDFS.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4332//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4332//console

This message is automatically generated.

 need log out an extra info in DFSOutputstream
 -

 Key: HDFS-4770
 URL: https://issues.apache.org/jira/browse/HDFS-4770
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.0.2-alpha
 Environment: need log out an extra info in DFSOutputstream
Reporter: Keyao Jin
Priority: Minor
  Labels: DFSOutputstream, an, extra, in, info, log, need, out
 Fix For: 2.0.2-alpha

 Attachments: HDFS.patch

   Original Estimate: 240h
  Remaining Estimate: 240h

 need log out an extra info in DFSOutputstream



[jira] [Commented] (HDFS-4721) Speed up lease/block recovery when DN fails and a block goes into recovery

2013-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643622#comment-13643622
 ] 

Hudson commented on HDFS-4721:
--

Integrated in Hadoop-Yarn-trunk #196 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/196/])
HDFS-4721. Speed up lease recovery by avoiding stale datanodes and choosing 
the datanode with the most recent heartbeat as the primary.  Contributed by 
Varun Sharma (Revision 1476399)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476399
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfoUnderConstruction.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestHeartbeatHandling.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java


 Speed up lease/block recovery when DN fails and a block goes into recovery
 --

 Key: HDFS-4721
 URL: https://issues.apache.org/jira/browse/HDFS-4721
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.0.3-alpha
Reporter: Varun Sharma
Assignee: Varun Sharma
 Fix For: 2.0.5-beta

 Attachments: 4721-branch2.patch, 4721-trunk.patch, 
 4721-trunk-v2.patch, 4721-trunk-v3.patch, 4721-trunk-v4.patch, 4721-v2.patch, 
 4721-v3.patch, 4721-v4.patch, 4721-v5.patch, 4721-v6.patch, 4721-v7.patch, 
 4721-v8.patch


 This was observed while doing HBase WAL recovery. HBase uses append to write 
 to its write-ahead log (WAL), so initially the pipeline is set up as
 DN1 -- DN2 -- DN3
 This WAL needs to be read when DN1 fails, since DN1 houses the HBase 
 regionserver for the WAL.
 HBase first recovers the lease on the WAL file. During recovery, we choose 
 DN1 as the primary DN to do the recovery even though DN1 has failed and is 
 no longer heartbeating.
 Avoiding the stale DN1 would speed up recovery and reduce HBase MTTR. There 
 are two options:
 a) Ride on HDFS-3703: if stale-node detection is turned on, do not choose 
 stale datanodes (typically those that have not heartbeated for 20-30 
 seconds) as primary DN(s).
 b) Sort the replicas in order of last heartbeat and always pick the ones 
 that gave the most recent heartbeat.
 Going to the dead datanode slows down lease and block recovery, since the 
 block goes into UNDER_RECOVERY state even though no one is actively 
 recovering it.
 Please let me know if this makes sense and, if yes, whether we should move 
 forward with a) or b).
 Thanks
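Option b) above amounts to selecting the replica whose datanode reported the most recent heartbeat. A minimal, self-contained sketch of that selection (the `Replica` record and `choosePrimary` method are hypothetical illustrations, not actual HDFS classes):

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of option b): pick the replica whose datanode
// reported the most recent heartbeat as the primary for block recovery,
// so a dead/stale node like DN1 is never chosen.
public class PrimarySelection {
    record Replica(String datanode, long lastHeartbeatMillis) {}

    // Choose the replica with the latest heartbeat timestamp.
    static Replica choosePrimary(List<Replica> replicas) {
        return replicas.stream()
                .max(Comparator.comparingLong(Replica::lastHeartbeatMillis))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<Replica> replicas = List.of(
                new Replica("DN1", 0L),        // failed, stale heartbeat
                new Replica("DN2", 90_000L),
                new Replica("DN3", 95_000L));  // most recent heartbeat
        System.out.println(choosePrimary(replicas).datanode()); // prints DN3
    }
}
```

The committed fix (per the commit message) combines both ideas: skip stale datanodes and prefer the most recently heartbeating one.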



[jira] [Commented] (HDFS-2576) Namenode should have a favored nodes hint to enable clients to have control over block placement.

2013-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643623#comment-13643623
 ] 

Hudson commented on HDFS-2576:
--

Integrated in Hadoop-Yarn-trunk #196 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/196/])
HDFS-2576. Enhances the DistributedFileSystem's create API so that clients 
can specify favored datanodes for a file's blocks. Contributed by Devaraj Das 
and Pritam Damania. (Revision 1476395)

 Result = SUCCESS
ddas : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476395
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockRetry.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFavoredNodesEndToEnd.java


 Namenode should have a favored nodes hint to enable clients to have control 
 over block placement.
 -

 Key: HDFS-2576
 URL: https://issues.apache.org/jira/browse/HDFS-2576
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client, namenode
Reporter: Pritam Damania
Assignee: Devaraj Das
 Fix For: 2.0.5-beta

 Attachments: hdfs-2576-1.txt, hdfs-2576-trunk-1.patch, 
 hdfs-2576-trunk-2.patch, hdfs-2576-trunk-7.1.patch, hdfs-2576-trunk-7.patch, 
 hdfs-2576-trunk-8.1.patch, hdfs-2576-trunk-8.2.patch, 
 hdfs-2576-trunk-8.3.patch, hdfs-2576-trunk-8.patch


 Sometimes clients like HBase need to dynamically choose the datanodes on 
 which to place the blocks of a file, for a higher level of locality. For 
 this purpose there needs to be a way to give the Namenode a hint, in the 
 form of a favoredNodes parameter, about the locations where the client 
 wants to put each block. The proposed solution is a favored-nodes parameter 
 in the addBlock() method and in the create() method that lets clients give 
 the NameNode hints about the locations of each replica of the block. Note 
 that this would be just a hint; ultimately the NameNode would look at disk 
 usage, datanode load, etc. and decide whether it can respect the hints.
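The hint-not-mandate behavior described above can be sketched in a few lines: honor favored nodes when they can accept a replica, otherwise fall back to other live nodes. All names here (`pickTargets`, `canAccept`) are hypothetical; the real placement decision lives in the NameNode's BlockPlacementPolicy:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch: prefer the client's favored nodes, but fall back to
// other live nodes when a favored node cannot accept a replica (full disk,
// heavy load, ...). The NameNode may thus ignore part or all of the hint.
public class FavoredNodesSketch {
    static List<String> pickTargets(List<String> favored,
                                    List<String> allLive,
                                    int replication,
                                    Predicate<String> canAccept) {
        List<String> targets = new ArrayList<>();
        for (String dn : favored) {                 // honor hints first
            if (targets.size() < replication && canAccept.test(dn)) {
                targets.add(dn);
            }
        }
        for (String dn : allLive) {                 // fill the remainder
            if (targets.size() == replication) break;
            if (!targets.contains(dn) && canAccept.test(dn)) {
                targets.add(dn);
            }
        }
        return targets;
    }
}
```

With replication 3, favored nodes {dn1, dn9}, and dn9 unable to take a replica, the sketch keeps dn1 and fills the rest from other live nodes, mirroring the "respect the hint if possible" contract.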



[jira] [Commented] (HDFS-4761) Refresh INodeMap in FSDirectory#reset()

2013-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643627#comment-13643627
 ] 

Hudson commented on HDFS-4761:
--

Integrated in Hadoop-Yarn-trunk #196 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/196/])
HDFS-4761. When resetting FSDirectory, the inodeMap should also be reset.  
Contributed by Jing Zhao (Revision 1476452)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476452
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java


 Refresh INodeMap in FSDirectory#reset()
 ---

 Key: HDFS-4761
 URL: https://issues.apache.org/jira/browse/HDFS-4761
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-4761.001.patch


 When resetting FSDirectory, the inodeMap should also be reset. I.e., we 
 should clear the inodeMap and then put in the new root node.
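The fix described above can be illustrated with a tiny self-contained sketch (the `INode` record and class names are hypothetical stand-ins for the real FSDirectory/INodeMap types):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the fix: on reset, the id-to-inode map must be
// cleared and re-seeded with the new root, or it keeps holding entries for
// the directory tree that was just discarded.
public class InodeMapResetSketch {
    record INode(long id, String name) {}

    private final Map<Long, INode> inodeMap = new HashMap<>();
    private INode root;

    InodeMapResetSketch() { reset(); }

    void add(INode inode) { inodeMap.put(inode.id(), inode); }

    // Reset: build a new root AND refresh the map, so no stale inodes remain.
    void reset() {
        root = new INode(1L, "/");
        inodeMap.clear();               // drop all stale entries
        inodeMap.put(root.id(), root);  // put in the new root node
    }

    int size() { return inodeMap.size(); }
}
```

Without the `clear()` plus re-insert, a reset directory would still resolve inode ids from before the reset, which is the bug the patch addresses.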



[jira] [Commented] (HDFS-4721) Speed up lease/block recovery when DN fails and a block goes into recovery

2013-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643653#comment-13643653
 ] 

Hudson commented on HDFS-4721:
--

Integrated in Hadoop-Hdfs-trunk #1385 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1385/])
HDFS-4721. Speed up lease recovery by avoiding stale datanodes and choosing 
the datanode with the most recent heartbeat as the primary.  Contributed by 
Varun Sharma (Revision 1476399)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476399
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfoUnderConstruction.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestHeartbeatHandling.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java


 Speed up lease/block recovery when DN fails and a block goes into recovery
 --

 Key: HDFS-4721
 URL: https://issues.apache.org/jira/browse/HDFS-4721
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.0.3-alpha
Reporter: Varun Sharma
Assignee: Varun Sharma
 Fix For: 2.0.5-beta

 Attachments: 4721-branch2.patch, 4721-trunk.patch, 
 4721-trunk-v2.patch, 4721-trunk-v3.patch, 4721-trunk-v4.patch, 4721-v2.patch, 
 4721-v3.patch, 4721-v4.patch, 4721-v5.patch, 4721-v6.patch, 4721-v7.patch, 
 4721-v8.patch


 This was observed while doing HBase WAL recovery. HBase uses append to write 
 to its write-ahead log (WAL), so initially the pipeline is set up as
 DN1 -- DN2 -- DN3
 This WAL needs to be read when DN1 fails, since DN1 houses the HBase 
 regionserver for the WAL.
 HBase first recovers the lease on the WAL file. During recovery, we choose 
 DN1 as the primary DN to do the recovery even though DN1 has failed and is 
 no longer heartbeating.
 Avoiding the stale DN1 would speed up recovery and reduce HBase MTTR. There 
 are two options:
 a) Ride on HDFS-3703: if stale-node detection is turned on, do not choose 
 stale datanodes (typically those that have not heartbeated for 20-30 
 seconds) as primary DN(s).
 b) Sort the replicas in order of last heartbeat and always pick the ones 
 that gave the most recent heartbeat.
 Going to the dead datanode slows down lease and block recovery, since the 
 block goes into UNDER_RECOVERY state even though no one is actively 
 recovering it.
 Please let me know if this makes sense and, if yes, whether we should move 
 forward with a) or b).
 Thanks



[jira] [Commented] (HDFS-2576) Namenode should have a favored nodes hint to enable clients to have control over block placement.

2013-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643654#comment-13643654
 ] 

Hudson commented on HDFS-2576:
--

Integrated in Hadoop-Hdfs-trunk #1385 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1385/])
HDFS-2576. Enhances the DistributedFileSystem's create API so that clients 
can specify favored datanodes for a file's blocks. Contributed by Devaraj Das 
and Pritam Damania. (Revision 1476395)

 Result = FAILURE
ddas : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476395
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockRetry.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFavoredNodesEndToEnd.java


 Namenode should have a favored nodes hint to enable clients to have control 
 over block placement.
 -

 Key: HDFS-2576
 URL: https://issues.apache.org/jira/browse/HDFS-2576
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client, namenode
Reporter: Pritam Damania
Assignee: Devaraj Das
 Fix For: 2.0.5-beta

 Attachments: hdfs-2576-1.txt, hdfs-2576-trunk-1.patch, 
 hdfs-2576-trunk-2.patch, hdfs-2576-trunk-7.1.patch, hdfs-2576-trunk-7.patch, 
 hdfs-2576-trunk-8.1.patch, hdfs-2576-trunk-8.2.patch, 
 hdfs-2576-trunk-8.3.patch, hdfs-2576-trunk-8.patch


 Sometimes clients like HBase need to dynamically choose the datanodes on 
 which to place the blocks of a file, for a higher level of locality. For 
 this purpose there needs to be a way to give the Namenode a hint, in the 
 form of a favoredNodes parameter, about the locations where the client 
 wants to put each block. The proposed solution is a favored-nodes parameter 
 in the addBlock() method and in the create() method that lets clients give 
 the NameNode hints about the locations of each replica of the block. Note 
 that this would be just a hint; ultimately the NameNode would look at disk 
 usage, datanode load, etc. and decide whether it can respect the hints.



[jira] [Commented] (HDFS-4761) Refresh INodeMap in FSDirectory#reset()

2013-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643658#comment-13643658
 ] 

Hudson commented on HDFS-4761:
--

Integrated in Hadoop-Hdfs-trunk #1385 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1385/])
HDFS-4761. When resetting FSDirectory, the inodeMap should also be reset.  
Contributed by Jing Zhao (Revision 1476452)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476452
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java


 Refresh INodeMap in FSDirectory#reset()
 ---

 Key: HDFS-4761
 URL: https://issues.apache.org/jira/browse/HDFS-4761
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-4761.001.patch


 When resetting FSDirectory, the inodeMap should also be reset. I.e., we 
 should clear the inodeMap and then put in the new root node.



[jira] [Commented] (HDFS-4767) directory is not snapshottable after clrQuota

2013-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643660#comment-13643660
 ] 

Hudson commented on HDFS-4767:
--

Integrated in Hadoop-Hdfs-Snapshots-Branch-build #170 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-Snapshots-Branch-build/170/])
HDFS-4767. If a directory is snapshottable, do not replace it when clearing 
quota.  Contributed by Jing Zhao (Revision 1476454)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476454
Files : 
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/CHANGES.HDFS-2802.txt
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSetQuotaWithSnapshot.java


 directory is not snapshottable after clrQuota
 -

 Key: HDFS-4767
 URL: https://issues.apache.org/jira/browse/HDFS-4767
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Ramya Sunil
Assignee: Jing Zhao
 Fix For: Snapshot (HDFS-2802)

 Attachments: HDFS-4767.001.patch


 1. hadoop dfs -mkdir /user/foo/hdfs-snapshots
 2. hadoop dfsadmin -setQuota 1 /user/foo/hdfs-snapshots
 3. hadoop dfsadmin -allowSnapshot /user/foo/hdfs-snapshots
 Allowing snapshot on /user/foo/hdfs-snapshots succeeded
 4. hadoop dfs -createSnapshot /user/foo/hdfs-snapshots s1
 createSnapshot: org.apache.hadoop.hdfs.protocol.NSQuotaExceededException: The 
 NameSpace quota (directories and files) is exceeded: quota=1 file count=2
 5. hadoop dfsadmin -clrQuota /user/foo/hdfs-snapshots
 6. hadoop dfs -createSnapshot /user/foo/hdfs-snapshots s1
 createSnapshot: 
 org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotException: Directory 
 is not a snapshottable directory: /user/foo/hdfs-snapshots
 Step 6 should have succeeded, since the directory was already made 
 snapshottable in step 3.



[jira] [Commented] (HDFS-4650) Add rename test in TestSnapshot

2013-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643661#comment-13643661
 ] 

Hudson commented on HDFS-4650:
--

Integrated in Hadoop-Hdfs-Snapshots-Branch-build #170 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-Snapshots-Branch-build/170/])
HDFS-4650. When passing two non-existing snapshot names to snapshotDiff, it 
returns success if the names are the same.  Contributed by Jing Zhao (Revision 
1476408)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476408
Files : 
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/CHANGES.HDFS-2802.txt
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* 
/hadoop/common/branches/HDFS-2802/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDiffReport.java


 Add rename test in TestSnapshot
 ---

 Key: HDFS-4650
 URL: https://issues.apache.org/jira/browse/HDFS-4650
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode, test
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Fix For: Snapshot (HDFS-2802)

 Attachments: HDFS-4650.001.patch


 Add more unit tests and update current unit tests to cover different cases 
 for rename with existence of snapshottable directories and snapshots.



[jira] [Commented] (HDFS-4768) block scanner does not close verification log when a block pool is being deleted (but the datanode remains running)

2013-04-27 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643667#comment-13643667
 ] 

Suresh Srinivas commented on HDFS-4768:
---

+1 for the patch.

 block scanner does not close verification log when a block pool is being 
 deleted (but the datanode remains running)
 ---

 Key: HDFS-4768
 URL: https://issues.apache.org/jira/browse/HDFS-4768
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.4-alpha
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-4768.1.patch


 HDFS-4274 fixed a file handle leak of the block scanner's verification logs 
 by adding method {{BlockPoolSliceScanner#shutdown}} and guaranteeing that the 
 method gets called for each live {{BlockPoolSliceScanner}} during datanode 
 shutdown.  However, that patch did not consider the case of deleting a block 
 pool via {{ClientDatanodeProtocol#deleteBlockPool}} while the datanode 
 remains running.
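The leak and its fix can be sketched with plain Java (class and method names here are hypothetical stand-ins for the real datanode types): removing a pool from the scanner map without shutting the scanner down leaks the open verification-log handle, so the delete path must close it explicitly.

```java
import java.io.Closeable;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: each block-pool scanner holds an open
// verification-log writer. Deleting a pool must shut its scanner down,
// not merely drop the map entry, or the file handle leaks.
public class BlockPoolScannerSketch {
    static class PoolScanner implements Closeable {
        private boolean logOpen = true;          // stands in for the log writer
        public void close() { logOpen = false; } // shutdown(): close the log
        boolean isLogOpen() { return logOpen; }
    }

    private final Map<String, PoolScanner> scanners = new HashMap<>();

    void addBlockPool(String bpid) { scanners.put(bpid, new PoolScanner()); }

    PoolScanner getScanner(String bpid) { return scanners.get(bpid); }

    // The fix: deleting a block pool closes the scanner it removes.
    void deleteBlockPool(String bpid) {
        PoolScanner s = scanners.remove(bpid);
        if (s != null) {
            s.close();
        }
    }
}
```

This mirrors the HDFS-4274 shutdown guarantee, extended to the delete-while-running path this issue covers.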



[jira] [Updated] (HDFS-4768) File handle leak in datanode when block pool is removed

2013-04-27 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4768:
--

Summary: File handle leak in datanode when block pool is removed  (was: 
block scanner does not close verification log when a block pool is being 
deleted (but the datanode remains running))

 File handle leak in datanode when block pool is removed
 ---

 Key: HDFS-4768
 URL: https://issues.apache.org/jira/browse/HDFS-4768
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.4-alpha
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-4768.1.patch


 HDFS-4274 fixed a file handle leak of the block scanner's verification logs 
 by adding method {{BlockPoolSliceScanner#shutdown}} and guaranteeing that the 
 method gets called for each live {{BlockPoolSliceScanner}} during datanode 
 shutdown.  However, that patch did not consider the case of deleting a block 
 pool via {{ClientDatanodeProtocol#deleteBlockPool}} while the datanode 
 remains running.



[jira] [Updated] (HDFS-4768) File handle leak in datanode when a block pool is removed

2013-04-27 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4768:
--

Summary: File handle leak in datanode when a block pool is removed  (was: 
File handle leak in datanode when block pool is removed)

 File handle leak in datanode when a block pool is removed
 -

 Key: HDFS-4768
 URL: https://issues.apache.org/jira/browse/HDFS-4768
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.4-alpha
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-4768.1.patch


 HDFS-4274 fixed a file handle leak of the block scanner's verification logs 
 by adding method {{BlockPoolSliceScanner#shutdown}} and guaranteeing that the 
 method gets called for each live {{BlockPoolSliceScanner}} during datanode 
 shutdown.  However, that patch did not consider the case of deleting a block 
 pool via {{ClientDatanodeProtocol#deleteBlockPool}} while the datanode 
 remains running.



[jira] [Commented] (HDFS-4721) Speed up lease/block recovery when DN fails and a block goes into recovery

2013-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643670#comment-13643670
 ] 

Hudson commented on HDFS-4721:
--

Integrated in Hadoop-Mapreduce-trunk #1412 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1412/])
HDFS-4721. Speed up lease recovery by avoiding stale datanodes and choosing 
the datanode with the most recent heartbeat as the primary.  Contributed by 
Varun Sharma (Revision 1476399)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476399
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockInfoUnderConstruction.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockInfoUnderConstruction.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestHeartbeatHandling.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestPipelinesFailover.java


 Speed up lease/block recovery when DN fails and a block goes into recovery
 --

 Key: HDFS-4721
 URL: https://issues.apache.org/jira/browse/HDFS-4721
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.0.3-alpha
Reporter: Varun Sharma
Assignee: Varun Sharma
 Fix For: 2.0.5-beta

 Attachments: 4721-branch2.patch, 4721-trunk.patch, 
 4721-trunk-v2.patch, 4721-trunk-v3.patch, 4721-trunk-v4.patch, 4721-v2.patch, 
 4721-v3.patch, 4721-v4.patch, 4721-v5.patch, 4721-v6.patch, 4721-v7.patch, 
 4721-v8.patch


 This was observed while doing HBase WAL recovery. HBase uses append to write 
 to its write-ahead log. So initially the pipeline is set up as
 DN1 -- DN2 -- DN3
 This WAL needs to be read when DN1 fails since it houses the HBase 
 regionserver for the WAL.
 HBase first recovers the lease on the WAL file. During recovery, we choose 
 DN1 as the primary DN to do the recovery even though DN1 has failed and is 
 not heartbeating any more.
 Avoiding the stale DN1 would speed up recovery and reduce hbase MTTR. There 
 are two options.
 a) Ride on HDFS-3703: if stale-node detection is turned on, we do not 
 choose stale datanodes (typically ones that have not heartbeated for 20-30 
 seconds) as primary DN(s)
 b) We sort the replicas in order of last heartbeat and always pick the ones 
 which gave the most recent heartbeat
 Going to the dead datanode increases lease + block recovery time, since the 
 block goes into the UNDER_RECOVERY state even though no one is actively 
 recovering it. 
 Please let me know if this makes sense. If yes, whether we should move 
 forward with a) or b).
 Thanks
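Option (b) above can be sketched as a self-contained toy model; every class, field, and constant below is illustrative, not the actual BlockInfoUnderConstruction/DatanodeManager code:

```java
import java.util.Comparator;
import java.util.List;

// Toy model of option (b): prefer the replica whose datanode sent the most
// recent heartbeat, and never pick a stale one. Names are illustrative.
public class PrimaryDatanodeChooser {
    // Rough stand-in for the 20-30 second staleness window mentioned above.
    public static final long STALE_INTERVAL_MS = 30_000L;

    public static class Replica {
        public final String datanodeId;
        public final long lastHeartbeatMs;
        public Replica(String datanodeId, long lastHeartbeatMs) {
            this.datanodeId = datanodeId;
            this.lastHeartbeatMs = lastHeartbeatMs;
        }
    }

    public static String choosePrimary(List<Replica> replicas, long nowMs) {
        return replicas.stream()
                .filter(r -> nowMs - r.lastHeartbeatMs < STALE_INTERVAL_MS) // skip stale DNs
                .max(Comparator.comparingLong(r -> r.lastHeartbeatMs))      // freshest heartbeat wins
                .map(r -> r.datanodeId)
                .orElseThrow(() -> new IllegalStateException("no live replica"));
    }

    public static void main(String[] args) {
        long now = 100_000L;
        List<Replica> pipeline = List.of(
                new Replica("DN1", now - 60_000L),  // failed DN, stale heartbeat
                new Replica("DN2", now - 2_000L),
                new Replica("DN3", now - 1_000L));
        System.out.println(choosePrimary(pipeline, now)); // prints DN3
    }
}
```

With DN1 stale, recovery is dispatched to a live datanode immediately instead of waiting out the dead one, which is the MTTR win described above.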



[jira] [Commented] (HDFS-2576) Namenode should have a favored nodes hint to enable clients to have control over block placement.

2013-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643671#comment-13643671
 ] 

Hudson commented on HDFS-2576:
--

Integrated in Hadoop-Mapreduce-trunk #1412 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1412/])
HDFS-2576. Enhances the DistributedFileSystem's create API so that clients 
can specify favored datanodes for a file's blocks. Contributed by Devaraj Das 
and Pritam Damania. (Revision 1476395)

 Result = SUCCESS
ddas : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476395
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicy.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockRetry.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFavoredNodesEndToEnd.java


 Namenode should have a favored nodes hint to enable clients to have control 
 over block placement.
 -

 Key: HDFS-2576
 URL: https://issues.apache.org/jira/browse/HDFS-2576
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client, namenode
Reporter: Pritam Damania
Assignee: Devaraj Das
 Fix For: 2.0.5-beta

 Attachments: hdfs-2576-1.txt, hdfs-2576-trunk-1.patch, 
 hdfs-2576-trunk-2.patch, hdfs-2576-trunk-7.1.patch, hdfs-2576-trunk-7.patch, 
 hdfs-2576-trunk-8.1.patch, hdfs-2576-trunk-8.2.patch, 
 hdfs-2576-trunk-8.3.patch, hdfs-2576-trunk-8.patch


 Sometimes clients like HBase need to dynamically compute the datanodes on 
 which they wish to place the blocks of a file, for a higher level of 
 locality. For this purpose there is a need for a way to give the Namenode a 
 hint, in terms of a favoredNodes parameter, about the locations where the 
 client wants to put each block. The proposed solution is a favored-nodes 
 parameter in the addBlock() method and in the create() file method to enable 
 clients to give hints to the NameNode about the locations of each 
 replica of the block. Note that this would be just a hint, and finally the 
 NameNode would look at disk usage, datanode load etc. and decide whether it 
 can respect the hints or not.
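The "hint, not a guarantee" contract can be sketched with a toy target chooser; the names below are illustrative and this is not the actual BlockPlacementPolicy logic:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Toy target chooser: honor favored nodes when they are available, and
// quietly fall back to other nodes for any shortfall, mirroring the
// "just a hint" semantics described above. Names are illustrative.
public class FavoredNodePlacement {
    public static List<String> chooseTargets(List<String> favored,
                                             Set<String> availableNodes,
                                             int replication) {
        Set<String> chosen = new LinkedHashSet<>();
        for (String f : favored) {                 // respect hints first
            if (chosen.size() == replication) break;
            if (availableNodes.contains(f)) chosen.add(f);
        }
        for (String n : availableNodes) {          // fill the remainder normally
            if (chosen.size() == replication) break;
            chosen.add(n);
        }
        return new ArrayList<>(chosen);
    }
}
```

A real implementation would also weigh disk usage and datanode load before accepting a favored node, as the description says; this sketch only shows the fallback shape.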



[jira] [Commented] (HDFS-4761) Refresh INodeMap in FSDirectory#reset()

2013-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643675#comment-13643675
 ] 

Hudson commented on HDFS-4761:
--

Integrated in Hadoop-Mapreduce-trunk #1412 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1412/])
HDFS-4761. When resetting FSDirectory, the inodeMap should also be reset.  
Contributed by Jing Zhao (Revision 1476452)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476452
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java


 Refresh INodeMap in FSDirectory#reset()
 ---

 Key: HDFS-4761
 URL: https://issues.apache.org/jira/browse/HDFS-4761
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-4761.001.patch


 When resetting FSDirectory, the inodeMap should also be reset. I.e., we 
 should clear the inodeMap and then put in the new root node.
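The fix can be pictured with a tiny self-contained model of the id-to-inode map; field and method names are illustrative, not the actual FSDirectory code:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model: a reset must clear the id -> inode map and then re-register
// the fresh root, or stale entries survive the reset.
public class InodeMapReset {
    public static final long ROOT_ID = 1L;
    public final Map<Long, String> inodeMap = new HashMap<>();

    public void addInode(long id, String path) {
        inodeMap.put(id, path);
    }

    public void reset() {
        inodeMap.clear();              // drop every stale inode entry
        inodeMap.put(ROOT_ID, "/");    // then put in the new root node
    }
}
```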



[jira] [Updated] (HDFS-4768) File handle leak in datanode when a block pool is removed

2013-04-27 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4768:
--

   Resolution: Fixed
Fix Version/s: 2.0.5-beta
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the patch to trunk and branch-2. Thank you Chris.

 File handle leak in datanode when a block pool is removed
 -

 Key: HDFS-4768
 URL: https://issues.apache.org/jira/browse/HDFS-4768
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.4-alpha
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 2.0.5-beta

 Attachments: HDFS-4768.1.patch


 HDFS-4274 fixed a file handle leak of the block scanner's verification logs 
 by adding method {{BlockPoolSliceScanner#shutdown}} and guaranteeing that the 
 method gets called for each live {{BlockPoolSliceScanner}} during datanode 
 shutdown.  However, that patch did not consider the case of deleting a block 
 pool via {{ClientDatanodeProtocol#deleteBlockPool}} while the datanode 
 remains running.



[jira] [Commented] (HDFS-4741) TestStorageRestore#testStorageRestoreFailure fails on Windows

2013-04-27 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643677#comment-13643677
 ] 

Suresh Srinivas commented on HDFS-4741:
---

+1 for the change.

 TestStorageRestore#testStorageRestoreFailure fails on Windows
 -

 Key: HDFS-4741
 URL: https://issues.apache.org/jira/browse/HDFS-4741
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HADOOP-4741.patch






[jira] [Commented] (HDFS-4768) File handle leak in datanode when a block pool is removed

2013-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643678#comment-13643678
 ] 

Hudson commented on HDFS-4768:
--

Integrated in Hadoop-trunk-Commit #3676 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3676/])
HDFS-4768. File handle leak in datanode when a block pool is removed. 
Contributed by Chris Nauroth. (Revision 1476579)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476579
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataBlockScanner.java


 File handle leak in datanode when a block pool is removed
 -

 Key: HDFS-4768
 URL: https://issues.apache.org/jira/browse/HDFS-4768
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.0.4-alpha
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 2.0.5-beta

 Attachments: HDFS-4768.1.patch


 HDFS-4274 fixed a file handle leak of the block scanner's verification logs 
 by adding method {{BlockPoolSliceScanner#shutdown}} and guaranteeing that the 
 method gets called for each live {{BlockPoolSliceScanner}} during datanode 
 shutdown.  However, that patch did not consider the case of deleting a block 
 pool via {{ClientDatanodeProtocol#deleteBlockPool}} while the datanode 
 remains running.



[jira] [Updated] (HDFS-4741) TestStorageRestore#testStorageRestoreFailure fails on Windows

2013-04-27 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4741:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I committed the patch to trunk. Thank you Arpit.

 TestStorageRestore#testStorageRestoreFailure fails on Windows
 -

 Key: HDFS-4741
 URL: https://issues.apache.org/jira/browse/HDFS-4741
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HADOOP-4741.patch






[jira] [Commented] (HDFS-4741) TestStorageRestore#testStorageRestoreFailure fails on Windows

2013-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643682#comment-13643682
 ] 

Hudson commented on HDFS-4741:
--

Integrated in Hadoop-trunk-Commit #3677 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3677/])
HDFS-4741. TestStorageRestore#testStorageRestoreFailure fails on Windows. 
Contributed by Arpit Agarwal. (Revision 1476585)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476585
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestStorageRestore.java


 TestStorageRestore#testStorageRestoreFailure fails on Windows
 -

 Key: HDFS-4741
 URL: https://issues.apache.org/jira/browse/HDFS-4741
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HADOOP-4741.patch






[jira] [Commented] (HDFS-4748) MiniJournalCluster#restartJournalNode leaks resources, which causes sporadic test failures

2013-04-27 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643683#comment-13643683
 ] 

Suresh Srinivas commented on HDFS-4748:
---

+1 for the patch.

 MiniJournalCluster#restartJournalNode leaks resources, which causes sporadic 
 test failures
 --

 Key: HDFS-4748
 URL: https://issues.apache.org/jira/browse/HDFS-4748
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: qjm, test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 3.0.0

 Attachments: HDFS-4748.1.patch


 {{MiniJournalCluster#restartJournalNode}} stops a {{JournalNode}} and then 
 recreates a new one with the same configuration.  However, it does not 
 maintain a reference to the new {{JournalNode}} instance, so it never 
 gets stopped inside {{MiniJournalCluster#shutdown}}.  The 
 {{JournalNode}} holds a file lock on its underlying storage, so this can 
 cause sporadic failures in tests like {{TestQuorumJournalManager}}.
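The leak can be shown with a small self-contained cluster model; the names below are illustrative, not the actual MiniJournalCluster code:

```java
import java.util.ArrayList;
import java.util.List;

// Toy cluster: restart() must store the recreated node back into the
// cluster's list, otherwise shutdown() only stops the stale object and
// the replacement keeps holding its storage lock.
public class MiniClusterModel {
    public static class Node {
        public boolean running = true;
        public void stop() { running = false; }
    }

    public final List<Node> nodes = new ArrayList<>();

    public Node start() {
        Node n = new Node();
        nodes.add(n);
        return n;
    }

    public Node restart(int i) {
        nodes.get(i).stop();
        Node fresh = new Node();
        nodes.set(i, fresh);   // the fix: keep a reference to the new node
        return fresh;
    }

    public void shutdown() {
        nodes.forEach(Node::stop);
    }
}
```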



[jira] [Updated] (HDFS-4748) MiniJournalCluster#restartJournalNode leaks resources, which causes sporadic test failures

2013-04-27 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4748:
--

   Resolution: Fixed
Fix Version/s: (was: 3.0.0)
   2.0.5-beta
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the patch to trunk and branch-2.

Thank you Chris! Todd, thank you for the review.

 MiniJournalCluster#restartJournalNode leaks resources, which causes sporadic 
 test failures
 --

 Key: HDFS-4748
 URL: https://issues.apache.org/jira/browse/HDFS-4748
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: qjm, test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 2.0.5-beta

 Attachments: HDFS-4748.1.patch


 {{MiniJournalCluster#restartJournalNode}} stops a {{JournalNode}} and then 
 recreates a new one with the same configuration.  However, it does not 
 maintain a reference to the new {{JournalNode}} instance, so it never 
 gets stopped inside {{MiniJournalCluster#shutdown}}.  The 
 {{JournalNode}} holds a file lock on its underlying storage, so this can 
 cause sporadic failures in tests like {{TestQuorumJournalManager}}.



[jira] [Commented] (HDFS-4748) MiniJournalCluster#restartJournalNode leaks resources, which causes sporadic test failures

2013-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643686#comment-13643686
 ] 

Hudson commented on HDFS-4748:
--

Integrated in Hadoop-trunk-Commit #3678 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3678/])
HDFS-4748. MiniJournalCluster#restartJournalNode leaks resources, which 
causes sporadic test failures. Contributed by Chris Nauroth. (Revision 1476587)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476587
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/MiniJournalCluster.java


 MiniJournalCluster#restartJournalNode leaks resources, which causes sporadic 
 test failures
 --

 Key: HDFS-4748
 URL: https://issues.apache.org/jira/browse/HDFS-4748
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: qjm, test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 2.0.5-beta

 Attachments: HDFS-4748.1.patch


 {{MiniJournalCluster#restartJournalNode}} stops a {{JournalNode}} and then 
 recreates a new one with the same configuration.  However, it does not 
 maintain a reference to the new {{JournalNode}} instance, so it never 
 gets stopped inside {{MiniJournalCluster#shutdown}}.  The 
 {{JournalNode}} holds a file lock on its underlying storage, so this can 
 cause sporadic failures in tests like {{TestQuorumJournalManager}}.



[jira] [Commented] (HDFS-4743) TestNNStorageRetentionManager fails on Windows

2013-04-27 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643688#comment-13643688
 ] 

Suresh Srinivas commented on HDFS-4743:
---

+1 for the patch.

 TestNNStorageRetentionManager fails on Windows
 --

 Key: HDFS-4743
 URL: https://issues.apache.org/jira/browse/HDFS-4743
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-4743.1.patch


 On Windows, this test fails on assertions about the expected purged files.



[jira] [Updated] (HDFS-4743) TestNNStorageRetentionManager fails on Windows

2013-04-27 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4743:
--

   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the patch to trunk. Thank you Chris!

 TestNNStorageRetentionManager fails on Windows
 --

 Key: HDFS-4743
 URL: https://issues.apache.org/jira/browse/HDFS-4743
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 3.0.0

 Attachments: HDFS-4743.1.patch


 On Windows, this test fails on assertions about the expected purged files.



[jira] [Commented] (HDFS-4750) Support NFSv3 interface to HDFS

2013-04-27 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643690#comment-13643690
 ] 

Suresh Srinivas commented on HDFS-4750:
---

bq. But, if it's preferred to do the change in a different branch, please let 
me know.
Since these changes do not render trunk unstable, I am okay with not having a 
branch for this development. If I do not hear a differing opinion, I will start 
reviewing and merging this patch next week.

 Support NFSv3 interface to HDFS
 ---

 Key: HDFS-4750
 URL: https://issues.apache.org/jira/browse/HDFS-4750
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-NFS-Proposal.pdf


 Accessing HDFS is usually done through the HDFS client or webHDFS. Lack of 
 seamless integration with the client's file system makes it difficult for 
 users, and impossible for some applications, to access HDFS. NFS interface 
 support is one way for HDFS to have such easy integration.
 This JIRA is to track NFS protocol support for accessing HDFS. With the HDFS 
 client, webHDFS and the NFS interface, HDFS will be easier to access and be 
 able to support more applications and use cases. 
 We will upload the design document and the initial implementation. 



[jira] [Commented] (HDFS-4740) Fixes for a few test failures on Windows

2013-04-27 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643691#comment-13643691
 ] 

Suresh Srinivas commented on HDFS-4740:
---

+1 for the patch.

 Fixes for a few test failures on Windows
 

 Key: HDFS-4740
 URL: https://issues.apache.org/jira/browse/HDFS-4740
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4740.002.patch, HDFS-4740.003.patch, HDFS-4740.patch


 This issue is to track the following Windows test failures:
 # TestWebHdfsUrl#testSecureAuthParamsInUrl fails with timeout. The 4 second 
 timeout is too low.
 # TestDFSUtil#testGetNNUris depends on the 127.0.0.1 -> localhost reverse 
 lookup, which does not happen on Windows.
 # TestLargeBlock#testLargeBlockSize fails with timeout. This test takes a 
 rather long time to complete on Windows. Part of the problem may be that we 
 are using small VMs for Windows testing.



[jira] [Commented] (HDFS-4743) TestNNStorageRetentionManager fails on Windows

2013-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643693#comment-13643693
 ] 

Hudson commented on HDFS-4743:
--

Integrated in Hadoop-trunk-Commit #3679 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3679/])
HDFS-4743. TestNNStorageRetentionManager fails on Windows. Contributed by 
Chris Nauroth. (Revision 1476591)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476591
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNNStorageRetentionManager.java


 TestNNStorageRetentionManager fails on Windows
 --

 Key: HDFS-4743
 URL: https://issues.apache.org/jira/browse/HDFS-4743
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 3.0.0

 Attachments: HDFS-4743.1.patch


 On Windows, this test fails on assertions about the expected purged files.



[jira] [Updated] (HDFS-4740) Fixes for a few test failures on Windows

2013-04-27 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4740:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I committed the patch to trunk.

Thank you Arpit! Thanks to Chris for the review.

 Fixes for a few test failures on Windows
 

 Key: HDFS-4740
 URL: https://issues.apache.org/jira/browse/HDFS-4740
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4740.002.patch, HDFS-4740.003.patch, HDFS-4740.patch


 This issue is to track the following Windows test failures:
 # TestWebHdfsUrl#testSecureAuthParamsInUrl fails with timeout. The 4 second 
 timeout is too low.
 # TestDFSUtil#testGetNNUris depends on the 127.0.0.1 -> localhost reverse 
 lookup, which does not happen on Windows.
 # TestLargeBlock#testLargeBlockSize fails with timeout. This test takes a 
 rather long time to complete on Windows. Part of the problem may be that we 
 are using small VMs for Windows testing.



[jira] [Updated] (HDFS-4722) TestGetConf#testFederation times out on Windows

2013-04-27 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4722:
--

   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the patch to trunk.

Thank you Ivan! Thanks Daryn for the review.

 TestGetConf#testFederation times out on Windows
 ---

 Key: HDFS-4722
 URL: https://issues.apache.org/jira/browse/HDFS-4722
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HDFS-4722.patch


 Test times out on the below stack:
 {code}
 java.lang.Exception: test timed out after 1 milliseconds
   at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
   at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:849)
   at java.net.InetAddress.getAddressFromNameService(InetAddress.java:1202)
   at java.net.InetAddress.getAllByName0(InetAddress.java:1153)
   at java.net.InetAddress.getAllByName(InetAddress.java:1083)
   at java.net.InetAddress.getAllByName(InetAddress.java:1019)
   at java.net.InetAddress.getByName(InetAddress.java:969)
   at 
 org.apache.hadoop.security.SecurityUtil$StandardHostResolver.getByName(SecurityUtil.java:543)
   at 
 org.apache.hadoop.security.SecurityUtil.getByName(SecurityUtil.java:530)
   at 
 org.apache.hadoop.net.NetUtils.createSocketAddrForHost(NetUtils.java:232)
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:160)
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:149)
   at 
 org.apache.hadoop.hdfs.DFSUtil.getAddressesForNameserviceId(DFSUtil.java:483)
   at org.apache.hadoop.hdfs.DFSUtil.getAddresses(DFSUtil.java:466)
   at 
 org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddresses(DFSUtil.java:592)
   at 
 org.apache.hadoop.hdfs.tools.TestGetConf.getAddressListFromConf(TestGetConf.java:109)
   at 
 org.apache.hadoop.hdfs.tools.TestGetConf.verifyAddresses(TestGetConf.java:209)
   at 
 org.apache.hadoop.hdfs.tools.TestGetConf.testFederation(TestGetConf.java:313)
 {code} 
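Most of the frames in the stack above are spent inside forward DNS resolution of the configured NameNode host names, which is what stalls on Windows. As a hedged illustration (the host name below is hypothetical and not from the patch), the JDK distinguishes between address construction that resolves eagerly and construction that defers the lookup, which is why resolution strategy dominates this test's runtime:

```java
public class NoDnsLookupSketch {
    public static void main(String[] args) {
        // The plain InetSocketAddress(host, port) constructor resolves the
        // host eagerly (the InetAddress.getByName path in the stack above);
        // createUnresolved() builds the address without any DNS call.
        java.net.InetSocketAddress addr =
                java.net.InetSocketAddress.createUnresolved("nn1.example.com", 8020);
        System.out.println(addr.isUnresolved());  // true: no DNS lookup was made
    }
}
```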



[jira] [Commented] (HDFS-4740) Fixes for a few test failures on Windows

2013-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643697#comment-13643697
 ] 

Hudson commented on HDFS-4740:
--

Integrated in Hadoop-trunk-Commit #3680 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3680/])
HDFS-4740. Fixes for a few test failures on Windows. Contributed by Arpit 
Agarwal. (Revision 1476596)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476596
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLargeBlock.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsUrl.java


 Fixes for a few test failures on Windows
 

 Key: HDFS-4740
 URL: https://issues.apache.org/jira/browse/HDFS-4740
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4740.002.patch, HDFS-4740.003.patch, HDFS-4740.patch


 This issue is to track the following Windows test failures:
 # TestWebHdfsUrl#testSecureAuthParamsInUrl fails with timeout. The 4 second 
 timeout is too low.
 # TestDFSUtil#testGetNNUris depends on the 127.0.0.1 -> localhost reverse 
 lookup, which does not happen on Windows.
 # TestLargeBlock#testLargeBlockSize fails with timeout. This test takes a 
 rather long time to complete on Windows. Part of the problem may be that we 
 are using small VMs for Windows testing.



[jira] [Commented] (HDFS-4722) TestGetConf#testFederation times out on Windows

2013-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643698#comment-13643698
 ] 

Hudson commented on HDFS-4722:
--

Integrated in Hadoop-trunk-Commit #3681 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3681/])
HDFS-4722. TestGetConf#testFederation times out on Windows. Contributed by 
Ivan Mitic. (Revision 1476597)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476597
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestGetConf.java


 TestGetConf#testFederation times out on Windows
 ---

 Key: HDFS-4722
 URL: https://issues.apache.org/jira/browse/HDFS-4722
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HDFS-4722.patch


 Test times out on the below stack:
 {code}
 java.lang.Exception: test timed out after 1 milliseconds
   at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
   at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:849)
   at java.net.InetAddress.getAddressFromNameService(InetAddress.java:1202)
   at java.net.InetAddress.getAllByName0(InetAddress.java:1153)
   at java.net.InetAddress.getAllByName(InetAddress.java:1083)
   at java.net.InetAddress.getAllByName(InetAddress.java:1019)
   at java.net.InetAddress.getByName(InetAddress.java:969)
   at 
 org.apache.hadoop.security.SecurityUtil$StandardHostResolver.getByName(SecurityUtil.java:543)
   at 
 org.apache.hadoop.security.SecurityUtil.getByName(SecurityUtil.java:530)
   at 
 org.apache.hadoop.net.NetUtils.createSocketAddrForHost(NetUtils.java:232)
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:160)
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:149)
   at 
 org.apache.hadoop.hdfs.DFSUtil.getAddressesForNameserviceId(DFSUtil.java:483)
   at org.apache.hadoop.hdfs.DFSUtil.getAddresses(DFSUtil.java:466)
   at 
 org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddresses(DFSUtil.java:592)
   at 
 org.apache.hadoop.hdfs.tools.TestGetConf.getAddressListFromConf(TestGetConf.java:109)
   at 
 org.apache.hadoop.hdfs.tools.TestGetConf.verifyAddresses(TestGetConf.java:209)
   at 
 org.apache.hadoop.hdfs.tools.TestGetConf.testFederation(TestGetConf.java:313)
 {code} 



[jira] [Commented] (HDFS-4734) Tests that use ShellCommandFencer are broken on Windows

2013-04-27 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643699#comment-13643699
 ] 

Suresh Srinivas commented on HDFS-4734:
---

+1 for the patch.

 Tests that use ShellCommandFencer are broken on Windows
 ---

 Key: HDFS-4734
 URL: https://issues.apache.org/jira/browse/HDFS-4734
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4734.001.patch


 The following tests use the POSIX true/false commands which are not available 
 on Windows.
 # TestDFSHAAdmin
 # TestDFSHAAdminMiniCluster
 # TestNodeFencer
 Additionally, ShellCommandFencer has a hard-coded dependency on bash (also 
 documented at 
 https://hadoop.apache.org/docs/r2.0.3-alpha/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html).
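The portability gap can be sketched in a few lines; this is an illustrative snippet showing why a fencing command that always succeeds must be chosen per platform (the command strings and helper name are assumptions, not the actual patch):

```java
public class FencerCommandSketch {
    // POSIX 'true' exits 0 but does not exist on Windows; 'cmd /c exit 0'
    // is the closest Windows equivalent. Hypothetical helper, not the fix.
    static String alwaysSucceedingCommand() {
        String os = System.getProperty("os.name").toLowerCase();
        return os.startsWith("windows") ? "cmd /c exit 0" : "true";
    }

    public static void main(String[] args) {
        System.out.println(alwaysSucceedingCommand());
    }
}
```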



[jira] [Commented] (HDFS-4734) Tests that use ShellCommandFencer are broken on Windows

2013-04-27 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643700#comment-13643700
 ] 

Suresh Srinivas commented on HDFS-4734:
---

BTW, it probably is a good idea to split this into separate common and HDFS 
patches. Can you please create a common patch and mark this jira as dependent on 
it?

 Tests that use ShellCommandFencer are broken on Windows
 ---

 Key: HDFS-4734
 URL: https://issues.apache.org/jira/browse/HDFS-4734
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4734.001.patch


 The following tests use the POSIX true/false commands which are not available 
 on Windows.
 # TestDFSHAAdmin
 # TestDFSHAAdminMiniCluster
 # TestNodeFencer
 Additionally, ShellCommandFencer has a hard-coded dependency on bash (also 
 documented at 
 https://hadoop.apache.org/docs/r2.0.3-alpha/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html).



[jira] [Commented] (HDFS-4748) MiniJournalCluster#restartJournalNode leaks resources, which causes sporadic test failures

2013-04-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643707#comment-13643707
 ] 

Chris Nauroth commented on HDFS-4748:
-

Thanks very much, Todd and Suresh!

 MiniJournalCluster#restartJournalNode leaks resources, which causes sporadic 
 test failures
 --

 Key: HDFS-4748
 URL: https://issues.apache.org/jira/browse/HDFS-4748
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: qjm, test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 2.0.5-beta

 Attachments: HDFS-4748.1.patch


 {{MiniJournalCluster#restartJournalNode}} stops a {{JournalNode}} and then 
 recreates a new one with the same configuration.  However, it does not 
 maintain a reference to the new {{JournalNode}} instance, so therefore it 
 doesn't get stopped inside {{MiniJournalCluster#shutdown}}.  The 
 {{JournalNode}} holds a file lock on its underlying storage, so this can 
 cause sporadic failures in tests like {{TestQuorumJournalManager}}.
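The leak pattern described above can be sketched in a few lines; the class and field names below are illustrative stand-ins, not the actual MiniJournalCluster code:

```java
import java.util.ArrayList;
import java.util.List;

public class RestartLeakSketch {
    static class Node {
        boolean running = true;
        void stop() { running = false; }  // stand-in for releasing the file lock
    }

    final List<Node> nodes = new ArrayList<>();

    RestartLeakSketch() { nodes.add(new Node()); }

    // The fix: replace the stored reference so shutdown() can reach the
    // recreated node. Dropping the set() call reproduces the leak.
    Node restartNode(int i) {
        nodes.get(i).stop();
        Node fresh = new Node();
        nodes.set(i, fresh);
        return fresh;
    }

    void shutdown() {
        for (Node n : nodes) n.stop();  // now stops the restarted node too
    }

    public static void main(String[] args) {
        RestartLeakSketch cluster = new RestartLeakSketch();
        Node fresh = cluster.restartNode(0);
        cluster.shutdown();
        System.out.println(fresh.running);  // false: the restarted node was stopped
    }
}
```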



[jira] [Commented] (HDFS-4610) Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute

2013-04-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643712#comment-13643712
 ] 

Chris Nauroth commented on HDFS-4610:
-

{quote}
My main goal was to add the missing functionality for Windows and set us up for 
the better cross platform support.
{quote}

Thanks for clarifying the scope of this patch.  I just wanted to make sure we 
didn't have a situation where a fix was working on some machines but not others.

+1 for the patch, dependent on HADOOP-9413 getting committed first and then 
getting a successful Jenkins run here.


 Move to using common utils FileUtil#setReadable/Writable/Executable and 
 FileUtil#canRead/Write/Execute
 --

 Key: HDFS-4610
 URL: https://issues.apache.org/jira/browse/HDFS-4610
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4610.commonfileutils.2.patch, 
 HDFS-4610.commonfileutils.patch


 Switch to using common utils described in HADOOP-9413 that work well 
 cross-platform.



[jira] [Commented] (HDFS-4705) Address HDFS test failures on Windows because of invalid dfs.namenode.name.dir

2013-04-27 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643726#comment-13643726
 ] 

Suresh Srinivas commented on HDFS-4705:
---

+1 for the patch. I will commit it soon.

 Address HDFS test failures on Windows because of invalid dfs.namenode.name.dir
 --

 Key: HDFS-4705
 URL: https://issues.apache.org/jira/browse/HDFS-4705
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
Priority: Minor
 Attachments: HDFS-4705.1.patch, HDFS-4705.2.patch


 Test fails on Windows with the below exception:
 {code}
 testFormatShouldBeIgnoredForNonFileBasedDirs(org.apache.hadoop.hdfs.server.namenode.TestAllowFormat)
   Time elapsed: 49 sec   ERROR!
 java.io.IOException: No image directories available!
   at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:912)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:905)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:151)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:758)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:259)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestAllowFormat.testFormatShouldBeIgnoredForNonFileBasedDirs(TestAllowFormat.java:181)
 {code}



[jira] [Updated] (HDFS-4705) Address HDFS test failures on Windows because of invalid dfs.namenode.name.dir

2013-04-27 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4705:
--

   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the patch to trunk.

Thank you Ivan. Thank you Chris for the reviews and valuable comments.

 Address HDFS test failures on Windows because of invalid dfs.namenode.name.dir
 --

 Key: HDFS-4705
 URL: https://issues.apache.org/jira/browse/HDFS-4705
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-4705.1.patch, HDFS-4705.2.patch


 Test fails on Windows with the below exception:
 {code}
 testFormatShouldBeIgnoredForNonFileBasedDirs(org.apache.hadoop.hdfs.server.namenode.TestAllowFormat)
   Time elapsed: 49 sec   ERROR!
 java.io.IOException: No image directories available!
   at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:912)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:905)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:151)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:758)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:259)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestAllowFormat.testFormatShouldBeIgnoredForNonFileBasedDirs(TestAllowFormat.java:181)
 {code}



[jira] [Commented] (HDFS-4705) Address HDFS test failures on Windows because of invalid dfs.namenode.name.dir

2013-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643729#comment-13643729
 ] 

Hudson commented on HDFS-4705:
--

Integrated in Hadoop-trunk-Commit #3684 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3684/])
HDFS-4705. Address HDFS test failures on Windows because of invalid 
dfs.namenode.name.dir. Contributed by Ivan Mitic. (Revision 1476610)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1476610
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAllowFormat.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNNThroughputBenchmark.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameEditsConfigs.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestValidateConfigurationSettings.java


 Address HDFS test failures on Windows because of invalid dfs.namenode.name.dir
 --

 Key: HDFS-4705
 URL: https://issues.apache.org/jira/browse/HDFS-4705
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
Priority: Minor
 Fix For: 3.0.0

 Attachments: HDFS-4705.1.patch, HDFS-4705.2.patch


 Test fails on Windows with the below exception:
 {code}
 testFormatShouldBeIgnoredForNonFileBasedDirs(org.apache.hadoop.hdfs.server.namenode.TestAllowFormat)
   Time elapsed: 49 sec   ERROR!
 java.io.IOException: No image directories available!
   at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:912)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:905)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:151)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:758)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:259)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestAllowFormat.testFormatShouldBeIgnoredForNonFileBasedDirs(TestAllowFormat.java:181)
 {code}



[jira] [Commented] (HDFS-4750) Support NFSv3 interface to HDFS

2013-04-27 Thread Hari Mankude (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643734#comment-13643734
 ] 

Hari Mankude commented on HDFS-4750:


I would recommend thinking through NFS write operations. The client does 
caching, and the page cache can result in lots of weirdness. For example, as 
long as the data is cached in the client's page cache, the client can do random 
writes and overwrites. When the page cache is flushed to the HDFS data store, 
some writes would fail (those that translate to overwrites in HDFS) while others 
might succeed (where the offsets happen to be appends).

An alternative to consider for supporting NFS writes is to require clients to do 
NFS mounts with directio enabled. Directio will bypass the client cache and 
might alleviate some of the funky behavior.



 Support NFSv3 interface to HDFS
 ---

 Key: HDFS-4750
 URL: https://issues.apache.org/jira/browse/HDFS-4750
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-NFS-Proposal.pdf


 Accessing HDFS is usually done through the HDFS client or webHDFS. The lack of 
 seamless integration with the client’s file system makes it difficult for 
 users, and impossible for some applications, to access HDFS. NFS interface 
 support is one way for HDFS to offer such easy integration.
 This JIRA is to track NFS protocol support for accessing HDFS. With the HDFS 
 client, webHDFS, and the NFS interface, HDFS will be easier to access and able 
 to support more applications and use cases.
 We will upload the design document and the initial implementation.



[jira] [Commented] (HDFS-4750) Support NFSv3 interface to HDFS

2013-04-27 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643737#comment-13643737
 ] 

Todd Lipcon commented on HDFS-4750:
---

Looking at some of the patches that have been posted, it appears that this 
project is entirely new/separate code from the rest of Hadoop. What is the 
purpose of putting it in Hadoop proper rather than proposing it as a separate 
project (e.g. in the incubator)? Bundling it with Hadoop has the downside that 
it makes our releases even bigger, whereas the general feeling of late has been 
that we should try to keep things out of 'core' (e.g. we removed a bunch of 
former contrib projects).

 Support NFSv3 interface to HDFS
 ---

 Key: HDFS-4750
 URL: https://issues.apache.org/jira/browse/HDFS-4750
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-NFS-Proposal.pdf


 Accessing HDFS is usually done through the HDFS client or webHDFS. The lack of 
 seamless integration with the client’s file system makes it difficult for 
 users, and impossible for some applications, to access HDFS. NFS interface 
 support is one way for HDFS to offer such easy integration.
 This JIRA is to track NFS protocol support for accessing HDFS. With the HDFS 
 client, webHDFS, and the NFS interface, HDFS will be easier to access and able 
 to support more applications and use cases.
 We will upload the design document and the initial implementation.



[jira] [Commented] (HDFS-4489) Use InodeID as as an identifier of a file in HDFS protocols and APIs

2013-04-27 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643745#comment-13643745
 ] 

Suresh Srinivas commented on HDFS-4489:
---

I ran Slive tests. Even with a very small amount of data written, I could not 
find a perceptible difference between the test runs, since any additional time 
in NN methods is dwarfed by the overall time of calling the NN over RPC etc.

So I decided to run NNThroughputBenchmark. For folks new to it, it is a micro 
benchmark that does not use RPC and directly executes operations on the namenode 
class. Hence it gives comparisons sharply limited to NN method calls alone. I 
ran NNThroughputBenchmark to create 100K files using 100 threads in each 
iteration, using the command below:
{noformat}
bin/hadoop jar share/hadoop/hdfs/hadoop-hdfs-2.0.5-SNAPSHOT-tests.jar 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -op create 
-threads 100 -files 10 -filesPerDir 100 
{noformat}

*Without this patch:*
||Operations||Elapsed||OpsPerSec||AvgTime||
|10| 20327| 4919.565110444237| 20|
|10| 19199| 5208.604614823688| 19|
|10| 19287| 5184.839529216571| 19|
|10| 19128| 5227.9381012128815| 19|
|10| 19082| 5240.540823813018| 19|
|10| 18785| 5323.396326856535| 18|
|10| 18947| 5277.880403230063| 18|
|10| 18963| 5273.427200337499| 18|
|10| 19206| 5206.706237634073| 19|
|10| 19434| 5145.621076463929| 19|
|Average|19235.8|5200.851942|18.8|

*With this patch:*
||Operations||Elapsed||OpsPerSec||AvgTime||
|10| 20104| 4974.134500596896| 19|
|10| 19498| 5128.731151913017| 19|
|10| 19449| 5141.652527122217| 19|
|10| 19530| 5120.327700972863| 19|
|10| 20067| 4983.305925150745| 19|
|10| 19703| 5075.369233111709| 19|
|10| 19595| 5103.342689461598| 19|
|10| 19418| 5149.860953754249| 19|
|10| 19932| 5017.057997190447| 19|
|10| 20596| 4855.311711011847| 20|
|Average|19789.2|5054.909439|19.1|

*With this patch + an additional change to turn off INodeMap:*
||Operations||Elapsed||OpsPerSec||AvgTime||
|10| 19615| 5098.139179199592| 19|
|10| 19349| 5168.225748100677| 19|
|10| 19136| 5225.752508361204| 19|
|10| 19347| 5168.760014472528| 19|
|10| 20096| 4976.114649681529| 19|
|10| 19248| 5195.344970906068| 19|
|10| 18916| 5286.529921759357| 18|
|10| 19217| 5203.7258677212885| 19|
|10| 20105| 4973.887092762994| 20|
|10| 19882| 5029.675082989639| 19|
|Average|19491.1|5132.615504|19|
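The OpsPerSec column is simply operations divided by elapsed seconds. Assuming each run created the 100K files mentioned in the comment (the operation counts shown in the tables appear truncated), the first "without patch" row reproduces as:

```java
public class ThroughputSketch {
    // OpsPerSec = operations * 1000 / elapsedMillis
    static double opsPerSec(long operations, long elapsedMillis) {
        return operations * 1000.0 / elapsedMillis;
    }

    public static void main(String[] args) {
        // First row without the patch: 20327 ms elapsed for 100K creates.
        double rate = opsPerSec(100_000, 20_327);
        System.out.println(Math.round(rate * 100) / 100.0);  // 4919.57
    }
}
```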


 Use InodeID as as an identifier of a file in HDFS protocols and APIs
 

 Key: HDFS-4489
 URL: https://issues.apache.org/jira/browse/HDFS-4489
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.0.5-beta


 Using the InodeID to uniquely identify a file has multiple benefits. Here are 
 a few of them:
 1. uniquely identify a file across renames; related JIRAs include HDFS-4258, 
 HDFS-4437.
 2. modification checks in tools like distcp. Since a file could have been 
 replaced or renamed, the file name and size combination is not reliable, 
 but the combination of file id and size is unique.
 3. id-based protocol support (e.g., NFS)
 4. make the pluggable block placement policy use the file id instead of the 
 file name (HDFS-385).



[jira] [Commented] (HDFS-4750) Support NFSv3 interface to HDFS

2013-04-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643757#comment-13643757
 ] 

Allen Wittenauer commented on HDFS-4750:


Have we run any of these against SPEC SFS?  What does iozone do with this?  Any 
clients besides Linux and Mac OS X? (FWIW: OS X's NFS client has always been a 
bit flaky...)  Have we thought about YANFS support?

 Support NFSv3 interface to HDFS
 ---

 Key: HDFS-4750
 URL: https://issues.apache.org/jira/browse/HDFS-4750
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-NFS-Proposal.pdf


 Accessing HDFS is usually done through the HDFS client or webHDFS. The lack of 
 seamless integration with the client’s file system makes it difficult for 
 users, and impossible for some applications, to access HDFS. NFS interface 
 support is one way for HDFS to offer such easy integration.
 This JIRA is to track NFS protocol support for accessing HDFS. With the HDFS 
 client, webHDFS, and the NFS interface, HDFS will be easier to access and able 
 to support more applications and use cases.
 We will upload the design document and the initial implementation.



[jira] [Commented] (HDFS-4750) Support NFSv3 interface to HDFS

2013-04-27 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643758#comment-13643758
 ] 

Andrew Purtell commented on HDFS-4750:
--

bq. What does iozone do with this?

This is a great question. Or fio, or another of the usual. 

 Support NFSv3 interface to HDFS
 ---

 Key: HDFS-4750
 URL: https://issues.apache.org/jira/browse/HDFS-4750
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-NFS-Proposal.pdf


 Accessing HDFS is usually done through the HDFS client or webHDFS. The lack of 
 seamless integration with the client’s file system makes it difficult for 
 users, and impossible for some applications, to access HDFS. NFS interface 
 support is one way for HDFS to offer such easy integration.
 This JIRA is to track NFS protocol support for accessing HDFS. With the HDFS 
 client, webHDFS, and the NFS interface, HDFS will be easier to access and able 
 to support more applications and use cases.
 We will upload the design document and the initial implementation.



[jira] [Commented] (HDFS-4750) Support NFSv3 interface to HDFS

2013-04-27 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643772#comment-13643772
 ] 

Brandon Li commented on HDFS-4750:
--

@Allen, @Andrew
{quote}This is a great question. Or fio, or another of the usual.
{quote}
The code is at a very early stage. We have done little performance testing so 
far. We did some tests with Cthon04 and NFStest (from NetApp). We will do 
performance tests once the code is relatively stable.

 Support NFSv3 interface to HDFS
 ---

 Key: HDFS-4750
 URL: https://issues.apache.org/jira/browse/HDFS-4750
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HADOOP-NFS-Proposal.pdf


 Accessing HDFS is usually done through the HDFS client or webHDFS. The lack of 
 seamless integration with the client’s file system makes it difficult for 
 users, and impossible for some applications, to access HDFS. NFS interface 
 support is one way for HDFS to offer such easy integration.
 This JIRA is to track NFS protocol support for accessing HDFS. With the HDFS 
 client, webHDFS, and the NFS interface, HDFS will be easier to access and able 
 to support more applications and use cases.
 We will upload the design document and the initial implementation.



[jira] [Commented] (HDFS-4750) Support NFSv3 interface to HDFS

2013-04-27 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13643816#comment-13643816
 ] 

Andrew Purtell commented on HDFS-4750:
--

bq. We will do some performance tests once the code is relatively stable.

Would be happy to help with that when you think the code is ready. 




[jira] [Commented] (HDFS-4578) Restrict snapshot IDs to 24-bits wide

2013-04-27 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13643818#comment-13643818
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-4578:
--

Patch looks good.  Some nits
- Use upper case letters for snapshotIdBitWidth since it is static.
- Change getMaxSnapshotID() to a constant MAX_SNAPSHOT_ID.
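A minimal sketch of the two suggestions applied (the class and field names here are assumptions for illustration, not the actual HDFS snapshot code):

```java
// Illustrative sketch only -- names are assumed, not the actual HDFS snapshot
// manager code. It shows the review suggestions applied: an UPPER_CASE name for
// the static bit width, and the max ID as a constant instead of a getter.
public class SnapshotIdLimits {
    // Static final constants use UPPER_CASE per Java conventions.
    public static final int SNAPSHOT_ID_BIT_WIDTH = 24;

    // 2^24 - 1 = 16,777,215: the "~16 million snapshots globally" limit
    // described in HDFS-4578.
    public static final int MAX_SNAPSHOT_ID = (1 << SNAPSHOT_ID_BIT_WIDTH) - 1;

    public static void main(String[] args) {
        System.out.println(MAX_SNAPSHOT_ID); // prints 16777215
    }
}
```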

 Restrict snapshot IDs to 24-bits wide
 -

 Key: HDFS-4578
 URL: https://issues.apache.org/jira/browse/HDFS-4578
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: Snapshot (HDFS-2802)
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: Snapshot (HDFS-2802)

 Attachments: HDFS-4578.patch, HDFS-4578.patch, HDFS-4578.patch


 Snapshot IDs will be restricted to 24-bits. This will allow at the most 
 ~16Million snapshots globally.



[jira] [Commented] (HDFS-4489) Use InodeID as an identifier of a file in HDFS protocols and APIs

2013-04-27 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13643839#comment-13643839
 ] 

Suresh Srinivas commented on HDFS-4489:
---

I made changes to the code to reuse the byte[][] pathComponents for file 
creation (and made some optimizations in that method; further optimizations 
around the permission checks are available that I did not attempt). The 
throughput with those partial optimizations is:
||Operations||Elapsed||OpsPerSec||AvgTime||
|10| 19591| 5104.384666428462| 19|
|10| 18969| 5271.759186040382| 18|
|10| 19206| 5206.706237634073| 19|
|10| 18652| 5361.35535063264| 18|
|10| 19218| 5203.455094182537| 19|
|10| 19179| 5214.036185411127| 19|
|10| 19302| 5180.810278727593| 19|
|10| 19388| 5157.829585310501| 19|
|10| 19099| 5235.876223886067| 19|
|10| 19591| 5104.384666428462| 19|
|Average|19219.5|5204.059747|18.8|
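For reference, the OpsPerSec column is operations divided by elapsed time in seconds. A quick consistency check on the first row (the operation count per run appears truncated in the table; back-computing from 5104.3846 ops/sec over 19,591 ms gives 100,000 operations, which is an inference, not a figure from the source):

```java
// Sketch of how the table columns relate (not benchmark code from the patch).
// The operation count of 100,000 per run is back-computed from row 1 of the
// table, since the Operations column appears truncated.
public class ThroughputCheck {
    static double opsPerSec(long ops, long elapsedMillis) {
        return ops * 1000.0 / elapsedMillis;
    }

    public static void main(String[] args) {
        System.out.printf("%.6f%n", opsPerSec(100_000, 19_591));
        // prints 5104.384666, matching the first row of the table
    }
}
```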


 Use InodeID as an identifier of a file in HDFS protocols and APIs
 

 Key: HDFS-4489
 URL: https://issues.apache.org/jira/browse/HDFS-4489
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.0.5-beta


 The benefit of using the InodeID to uniquely identify a file is manifold. 
 Here are a few examples:
 1. Uniquely identifying a file across renames; related JIRAs include 
 HDFS-4258 and HDFS-4437.
 2. Modification checks in tools like distcp: since a file could have been 
 replaced or renamed, the combination of file name and size is not reliable, 
 but the combination of file id and size is unique.
 3. ID-based protocol support (e.g., NFS).
 4. Making the pluggable block placement policy use the file id instead of the 
 filename (HDFS-385).
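Point 2 above can be sketched as follows (a hypothetical helper, not a real distcp API: a path can be reused by an unrelated file after a rename or replace, while an inode id stays stable for the life of a file):

```java
// Hypothetical sketch of point 2 (not real distcp code): comparing by
// (fileId, length) survives renames and replacements, while (path, length)
// can falsely match a different file that reused the same name.
import java.util.Objects;

public class CopyCheck {
    // Minimal stand-in for file metadata; the field names are assumptions.
    static final class FileMeta {
        final long fileId;   // inode id: stable across renames
        final String path;   // can be reused by an unrelated file
        final long length;
        FileMeta(long fileId, String path, long length) {
            this.fileId = fileId; this.path = path; this.length = length;
        }
    }

    // Reliable: same inode id and same size.
    static boolean unchangedById(FileMeta a, FileMeta b) {
        return a.fileId == b.fileId && a.length == b.length;
    }

    // Unreliable: a different file may have been renamed into place.
    static boolean unchangedByName(FileMeta a, FileMeta b) {
        return Objects.equals(a.path, b.path) && a.length == b.length;
    }

    public static void main(String[] args) {
        FileMeta before  = new FileMeta(1001, "/data/part-0", 4096);
        FileMeta swapped = new FileMeta(2002, "/data/part-0", 4096); // replaced file
        System.out.println(unchangedByName(before, swapped)); // true  (false positive)
        System.out.println(unchangedById(before, swapped));   // false (correct)
    }
}
```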



[jira] [Updated] (HDFS-4489) Use InodeID as an identifier of a file in HDFS protocols and APIs

2013-04-27 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-4489:
--

Attachment: 4434.optimized.patch

Attaching a patch to give an idea of how to reuse the path components added in 
HDFS-4434.
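Roughly, the idea (a hypothetical sketch, not the actual HDFS-4434 code) is to split the path into its byte[][] components once and hand the same array to every step that needs it, instead of re-parsing the path string at each permission check and lookup:

```java
// Hypothetical sketch of reusing path components (not the HDFS-4434 patch):
// resolve the path into UTF-8 components once, then pass the array around
// instead of re-splitting the string at each step.
import java.nio.charset.StandardCharsets;

public class PathComponents {
    // Split "/a/b/c" into {"a", "b", "c"} as UTF-8 byte arrays, once.
    static byte[][] split(String path) {
        String[] parts = path.substring(1).split("/");
        byte[][] components = new byte[parts.length][];
        for (int i = 0; i < parts.length; i++) {
            components[i] = parts[i].getBytes(StandardCharsets.UTF_8);
        }
        return components;
    }

    public static void main(String[] args) {
        byte[][] components = split("/user/suresh/file");
        // The same array would then be reused by, e.g., a permission check
        // and an inode lookup, avoiding repeated string parsing.
        System.out.println(components.length); // prints 3
    }
}
```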

 Use InodeID as an identifier of a file in HDFS protocols and APIs
 

 Key: HDFS-4489
 URL: https://issues.apache.org/jira/browse/HDFS-4489
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Brandon Li
Assignee: Brandon Li
 Fix For: 2.0.5-beta

 Attachments: 4434.optimized.patch


 The benefit of using the InodeID to uniquely identify a file is manifold. 
 Here are a few examples:
 1. Uniquely identifying a file across renames; related JIRAs include 
 HDFS-4258 and HDFS-4437.
 2. Modification checks in tools like distcp: since a file could have been 
 replaced or renamed, the combination of file name and size is not reliable, 
 but the combination of file id and size is unique.
 3. ID-based protocol support (e.g., NFS).
 4. Making the pluggable block placement policy use the file id instead of the 
 filename (HDFS-385).



[jira] [Updated] (HDFS-4760) Update inodeMap after node replacement

2013-04-27 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4760:


Attachment: HDFS-4760.003.patch

Thanks for the comments, Nicholas! Updated the patch accordingly.

 Update inodeMap after node replacement
 --

 Key: HDFS-4760
 URL: https://issues.apache.org/jira/browse/HDFS-4760
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-4760.001.patch, HDFS-4760.002.patch, 
 HDFS-4760.003.patch


 Similar to HDFS-4757, we need to update the inodeMap after node 
 replacement. Because a lot of node replacement happens in the snapshot branch 
 (e.g., INodeDirectory -> INodeDirectoryWithSnapshot, INodeDirectory -> 
 INodeDirectorySnapshottable, INodeFile -> INodeFileWithSnapshot, ...), this 
 becomes a non-trivial issue.
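The required bookkeeping can be sketched as follows (a hypothetical illustration, not the HDFS-4760 patch itself): when an inode object is replaced by a snapshot-aware variant with the same id, the id-keyed map must be repointed at the new object, or lookups keep returning the stale instance:

```java
// Hypothetical sketch of the inodeMap update (not the actual HDFS-4760 patch):
// a replacement node keeps the same id, so put() must overwrite the stale entry.
import java.util.HashMap;
import java.util.Map;

public class InodeMapUpdate {
    static class INode {
        final long id;
        INode(long id) { this.id = id; }
    }
    // Stand-in for a snapshot-aware replacement node.
    static class INodeWithSnapshot extends INode {
        INodeWithSnapshot(long id) { super(id); }
    }

    static final Map<Long, INode> inodeMap = new HashMap<>();

    // Replacement keeps the same id; put() overwrites the stale entry.
    static void replace(INode oldNode, INode newNode) {
        assert oldNode.id == newNode.id;
        inodeMap.put(newNode.id, newNode);
    }

    public static void main(String[] args) {
        INode dir = new INode(42);
        inodeMap.put(dir.id, dir);
        replace(dir, new INodeWithSnapshot(42));
        System.out.println(inodeMap.get(42L) instanceof INodeWithSnapshot); // true
    }
}
```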



[jira] [Updated] (HDFS-4770) need log out an extra info in DFSOutputstream

2013-04-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-4770:
-

Tags:   (was: need log out an extra info in DFSOutputstream)
  Resolution: Won't Fix
Release Note:   (was: need log out an extra info in DFSOutputstream)
  Status: Resolved  (was: Patch Available)

This patch actually makes logging worse by replacing the useful information 
{{block}} with {{testing testing}}. 

 need log out an extra info in DFSOutputstream
 -

 Key: HDFS-4770
 URL: https://issues.apache.org/jira/browse/HDFS-4770
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.0.2-alpha
 Environment: need log out an extra info in DFSOutputstream
Reporter: Keyao Jin
Priority: Minor
  Labels: DFSOutputstream, an, extra, in, info, log, need, out
 Fix For: 2.0.2-alpha

 Attachments: HDFS.patch

   Original Estimate: 240h
  Remaining Estimate: 240h

 need log out an extra info in DFSOutputstream
