[jira] Commented: (HDFS-94) The Heap Size in HDFS web ui may not be accurate

2009-12-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-94?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12794609#action_12794609
 ] 

Hudson commented on HDFS-94:


Integrated in Hadoop-Hdfs-trunk #182 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/182/])


 The Heap Size in HDFS web ui may not be accurate
 --

 Key: HDFS-94
 URL: https://issues.apache.org/jira/browse/HDFS-94
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Dmytro Molkov
 Fix For: 0.22.0

 Attachments: HDFS-94.patch


 It seems that the Heap Size shown in the HDFS web UI is not accurate.  It 
 keeps showing 100% usage, e.g.
 {noformat}
 Heap Size is 10.01 GB / 10.01 GB (100%) 
 {noformat}
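
The 100% reading above is consistent with the page comparing the committed heap 
against itself once the JVM has grown to its -Xmx ceiling. As a hedged 
illustration only (not the actual HDFS fix; HeapReport and heapSummary are 
hypothetical names), the figures can be derived from the JMX MemoryMXBean so 
that "used" reflects live data rather than the committed size:

{noformat}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapReport {
  // Report used vs. maximum heap.  Comparing committed vs. committed
  // always reads 100% once the heap has grown to the -Xmx limit.
  static String heapSummary() {
    MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
    long used = heap.getUsed();   // bytes currently occupied by objects
    long max = heap.getMax();     // -Xmx ceiling; may be -1 if undefined
    double percent = max > 0 ? 100.0 * used / max : 0.0;
    return String.format("Heap Size is %.2f GB / %.2f GB (%.0f%%)",
        used / 1e9, max / 1e9, percent);
  }

  public static void main(String[] args) {
    System.out.println(heapSummary());
  }
}
{noformat}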

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-564) Adding pipeline test 17-35

2009-12-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12794612#action_12794612
 ] 

Hudson commented on HDFS-564:
-

Integrated in Hadoop-Hdfs-trunk #182 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/182/])


 Adding pipeline test 17-35
 --

 Key: HDFS-564
 URL: https://issues.apache.org/jira/browse/HDFS-564
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 0.21.0
Reporter: Kan Zhang
Assignee: Hairong Kuang
Priority: Blocker
 Fix For: 0.21.0, 0.22.0

 Attachments: h564-24.patch, h564-25.patch, pipelineTests.patch, 
 pipelineTests1.patch, pipelineTests2.patch




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-101) DFS write pipeline : DFSClient sometimes does not detect second datanode failure

2009-12-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12794610#action_12794610
 ] 

Hudson commented on HDFS-101:
-

Integrated in Hadoop-Hdfs-trunk #182 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/182/])


 DFS write pipeline : DFSClient sometimes does not detect second datanode 
 failure 
 -

 Key: HDFS-101
 URL: https://issues.apache.org/jira/browse/HDFS-101
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.20.1
Reporter: Raghu Angadi
Assignee: Hairong Kuang
Priority: Blocker
 Fix For: 0.20.2, 0.21.0, 0.22.0

 Attachments: detectDownDN-0.20.patch, detectDownDN1-0.20.patch, 
 detectDownDN2.patch, detectDownDN3-0.20.patch, detectDownDN3.patch, 
 hdfs-101.tar.gz


 When the first datanode's write to the second datanode fails or times out, 
 DFSClient ends up marking the first datanode as the bad one and removes it 
 from the pipeline. A similar problem exists on the DataNode as well, and it 
 was fixed in HADOOP-3339. From HADOOP-3339: 
 The main issue is that the BlockReceiver thread (and DataStreamer in the case 
 of DFSClient) interrupt() the 'responder' thread. But interrupting is a pretty 
 coarse control. We don't know what state the responder is in, and interrupting 
 has different effects depending on responder state. To fix this properly we 
 need to redesign how we handle these interactions.
 When the first datanode closes its socket to DFSClient, DFSClient should 
 properly read all the data left in the socket. Also, the DataNode's closing of 
 the socket should not result in a TCP reset; otherwise, I think DFSClient will 
 not be able to read from the socket.
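
As a minimal sketch of the last point (illustration only, not the HDFS-101 
patch; SocketDrain and drainToEof are hypothetical names): the client can 
consume everything the datanode already sent before abandoning the connection, 
so the final response is not lost when the peer closes first.

{noformat}
import java.io.IOException;
import java.io.InputStream;

final class SocketDrain {
  // Read the socket's input stream to EOF, discarding the bytes.  This
  // ensures any response the peer wrote before closing is consumed rather
  // than thrown away with the connection.
  static void drainToEof(InputStream in) throws IOException {
    byte[] buf = new byte[4096];
    while (in.read(buf) != -1) {
      // discard; we only care that every buffered byte is consumed
    }
  }
}
{noformat}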

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-630) In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block.

2009-12-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12794611#action_12794611
 ] 

Hudson commented on HDFS-630:
-

Integrated in Hadoop-Hdfs-trunk #182 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/182/])


 In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific 
 datanodes when locating the next block.
 ---

 Key: HDFS-630
 URL: https://issues.apache.org/jira/browse/HDFS-630
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client, name-node
Affects Versions: 0.21.0
Reporter: Ruyue Ma
Assignee: Cosmin Lehene
 Attachments: 0001-Fix-HDFS-630-0.21-svn.patch, 
 0001-Fix-HDFS-630-for-0.21-and-trunk-unified.patch, 
 0001-Fix-HDFS-630-for-0.21.patch, 0001-Fix-HDFS-630-svn.patch, 
 0001-Fix-HDFS-630-svn.patch, 0001-Fix-HDFS-630-trunk-svn-1.patch, 
 0001-Fix-HDFS-630-trunk-svn-2.patch, HDFS-630.patch


 Created from HDFS-200.
 If, during a write, the DFSClient sees that a block replica location for a 
 newly allocated block is not connectable, it re-requests the NN for a fresh 
 set of replica locations for the block. It tries this 
 dfs.client.block.write.retries times (default 3), sleeping 6 seconds between 
 each retry (see DFSClient.nextBlockOutputStream).
 This setting works well when you have a reasonably sized cluster; if you have 
 few datanodes in the cluster, every retry may pick the dead datanode and the 
 above logic bails out.
 Our solution: when getting block locations from the namenode, we give the NN 
 the excluded datanodes. The list of dead datanodes applies only to a single 
 block allocation.
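
As a hedged sketch of the proposed behavior (hypothetical names throughout: 
NameNodeClient and locateBlock stand in for the real ClientProtocol call, 
whose signature change is what the versionID bump discussed later refers to), 
the exclusion list is scoped to a single block allocation, so a small cluster 
with one dead datanode is not re-offered the same node on every retry.

{noformat}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

final class NextBlockSketch {
  interface NameNodeClient {
    // Hypothetical stand-in: ask the NN for targets, excluding known-bad nodes.
    String[] locateBlock(String src, List<String> excludedNodes) throws IOException;
  }

  static String[] nextBlockTargets(NameNodeClient nn, String src, int retries)
      throws IOException {
    List<String> excluded = new ArrayList<>();  // lives for this allocation only
    for (int attempt = 0; attempt <= retries; attempt++) {
      String[] targets = nn.locateBlock(src, excluded);
      String bad = firstUnreachable(targets);
      if (bad == null) {
        return targets;             // every node in the pipeline is connectable
      }
      excluded.add(bad);            // never offer this node again for this block
    }
    throw new IOException("could not allocate a connectable block");
  }

  private static String firstUnreachable(String[] targets) {
    return null;  // placeholder: a real client would try to open the pipeline
  }
}
{noformat}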

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-762) Trying to start the balancer throws a NPE

2009-12-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12794608#action_12794608
 ] 

Hudson commented on HDFS-762:
-

Integrated in Hadoop-Hdfs-trunk #182 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/182/])


 Trying to start the balancer throws a NPE
 -

 Key: HDFS-762
 URL: https://issues.apache.org/jira/browse/HDFS-762
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 0.21.0
Reporter: Cristian Ivascu
Assignee: Cristian Ivascu
 Fix For: 0.21.0

 Attachments: 0001-corrected-balancer-constructor.patch, HDFS-762.patch


 When trying to run the balancer, I get a NullPointerException:
 2009-11-10 11:08:14,235 ERROR org.apache.hadoop.hdfs.server.balancer.Balancer: java.lang.NullPointerException
   at org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicy.getInstance(BlockPlacementPolicy.java:161)
   at org.apache.hadoop.hdfs.server.balancer.Balancer.checkReplicationPolicyCompatibility(Balancer.java:784)
   at org.apache.hadoop.hdfs.server.balancer.Balancer.init(Balancer.java:792)
   at org.apache.hadoop.hdfs.server.balancer.Balancer.main(Balancer.java:814)
 This happens when trying to use bin/start-balancer or bin/hdfs balancer 
 -threshold 10. The config files (hdfs-site and core-site) set fs.default.name 
 to hdfs://namenode:9000.
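
The trace points at BlockPlacementPolicy.getInstance() being handed a null 
argument from the Balancer startup path, and the attachment name 
(0001-corrected-balancer-constructor.patch) suggests the constructor was not 
retaining its Configuration. A hedged reconstruction of that failure mode, 
with hypothetical names (BalancerSketch is not the real class):

{noformat}
import org.apache.hadoop.conf.Configuration;

class BalancerSketch {
  private final Configuration conf;

  BalancerSketch(Configuration conf) {
    this.conf = conf;  // the fix amounts to actually retaining/forwarding this
  }

  void checkReplicationPolicyCompatibility() {
    // If conf was never set, the real code's call to
    // BlockPlacementPolicy.getInstance(conf, ...) dereferences null here.
    if (conf == null) {
      throw new NullPointerException("Configuration was never set");
    }
  }
}
{noformat}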

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-814) Add an api to get the visible length of a DFSDataInputStream.

2009-12-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12794614#action_12794614
 ] 

Hudson commented on HDFS-814:
-

Integrated in Hadoop-Hdfs-trunk #182 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/182/])


 Add an api to get the visible length of a DFSDataInputStream.
 -

 Key: HDFS-814
 URL: https://issues.apache.org/jira/browse/HDFS-814
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs client
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
 Fix For: 0.21.0, 0.22.0

 Attachments: h814_20091221.patch, h814_20091221_0.21.patch


 Hflush guarantees that the bytes written before it are visible to new 
 readers.  However, there is no way to get the length of the visible bytes.  
 The visible length is useful in applications like SequenceFile.
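
A hedged usage sketch: the exact method name on DFSDataInputStream is an 
assumption here (the attachments don't name it), but the intended shape is a 
query the reader can make to bound its reads while the file is still being 
written.

{noformat}
import java.io.IOException;

// VisibleLengthStream is a hypothetical interface standing in for the new
// DFSDataInputStream API.  Bytes in [0, getVisibleLength()) are those a
// previous hflush() has guaranteed visible to new readers.
interface VisibleLengthStream {
  long getVisibleLength() throws IOException;
}

final class TailReader {
  static long safeReadLimit(VisibleLengthStream in) throws IOException {
    return in.getVisibleLength();  // never read past this while tailing
  }
}
{noformat}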

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-775) FSDataset calls getCapacity() twice -bug?

2009-12-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12794615#action_12794615
 ] 

Hudson commented on HDFS-775:
-

Integrated in Hadoop-Hdfs-trunk #182 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/182/])


 FSDataset calls getCapacity() twice -bug?
 -

 Key: HDFS-775
 URL: https://issues.apache.org/jira/browse/HDFS-775
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: data-node
Affects Versions: 0.22.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Minor
 Fix For: 0.22.0

 Attachments: HDFS-775-1.patch, HDFS-775-2.patch


 I'm not sure whether this is a bug or intended behavior, but I thought I'd 
 mention it.
 FSDataset.getCapacity() calls DF.getCapacity() twice when evaluating its 
 capacity. Although there is caching to stop the shell being exec'd twice in a 
 row, there is a risk that the first call doesn't run the shell and the second 
 does, so the value changes during the method.
 If that is not intended, it would be better to cache the first value for the 
 whole method.
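
A minimal sketch of the suggested caching, with hypothetical names (DFLike 
stands in for org.apache.hadoop.fs.DF): read the value once and reuse it, so 
both uses within the method see the same snapshot even if the underlying df 
shell is re-run in between.

{noformat}
final class CapacitySketch {
  interface DFLike {
    long getCapacity();  // stand-in for DF#getCapacity(), backed by the df shell
  }

  static long usableCapacity(DFLike df, long reserved) {
    long capacity = df.getCapacity();  // single read for the whole method
    return capacity > reserved ? capacity - reserved : 0L;
  }
}
{noformat}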

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-849) TestFiDataTransferProtocol2#pipeline_Fi_18 sometimes fails

2009-12-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12794613#action_12794613
 ] 

Hudson commented on HDFS-849:
-

Integrated in Hadoop-Hdfs-trunk #182 (See 
[http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/182/])


 TestFiDataTransferProtocol2#pipeline_Fi_18 sometimes fails
 --

 Key: HDFS-849
 URL: https://issues.apache.org/jira/browse/HDFS-849
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 0.20.1
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.21.0, 0.22.0

 Attachments: countDown.patch


 TestFiDataTransferProtocol2#pipeline_Fi_18 sometimes fails with the 
 following error:
 junit.framework.AssertionFailedError:
   at org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.runTest17_19(TestFiDataTransferProtocol2.java:139)
   at org.apache.hadoop.hdfs.server.datanode.TestFiDataTransferProtocol2.pipeline_Fi_18(TestFiDataTransferProtocol2.java:186)
 This means the test did not trigger pipeline recovery. The test log shows 
 that no fault was injected into the pipeline. It turns out there is a bug in 
 the test code: counting down 3 means injecting a fault when receiving the 
 fourth packet, but the code allows the file to have only 3 packets.
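
The off-by-one is easy to see in isolation. A small illustration with 
hypothetical names (CountdownSketch is not the actual fault-injection class): 
a countdown initialised to N only fires on packet N+1, so the file must be 
guaranteed at least N+1 packets for the fault to trigger.

{noformat}
import java.util.concurrent.atomic.AtomicInteger;

final class CountdownSketch {
  private final AtomicInteger remaining;

  CountdownSketch(int count) {
    this.remaining = new AtomicInteger(count);
  }

  boolean shouldInjectFault() {
    // Packets 1..count decrement the counter; the fault fires on packet count+1.
    return remaining.getAndDecrement() <= 0;
  }

  public static void main(String[] args) {
    CountdownSketch c = new CountdownSketch(3);
    for (int pkt = 1; pkt <= 4; pkt++) {
      System.out.println("packet " + pkt + ": inject=" + c.shouldInjectFault());
    }
    // Prints false, false, false, true: with only 3 packets the fault never fires.
  }
}
{noformat}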

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-630) In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block.

2009-12-26 Thread Cosmin Lehene (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cosmin Lehene updated HDFS-630:
---

Attachment: 0001-Fix-HDFS-630-0.21-svn-1.patch
0001-Fix-HDFS-630-trunk-svn-3.patch

New patches for 0.21 and trunk. ClientProtocol versionID is 53L for 0.21 and 
54L for trunk.
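
For context, a hedged illustration (ClientProtocolSketch is a made-up name): 
Hadoop RPC interfaces declare a versionID constant that is bumped whenever a 
method signature changes, so client and server can detect a mismatch; the 
53L/54L split reflects the patch landing on two diverged branches.

{noformat}
public interface ClientProtocolSketch {
  // Trunk value per the comment above; the 0.21 branch patch uses 53L.
  long versionID = 54L;
}
{noformat}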

 In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific 
 datanodes when locating the next block.
 ---

 Key: HDFS-630
 URL: https://issues.apache.org/jira/browse/HDFS-630
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client, name-node
Affects Versions: 0.21.0
Reporter: Ruyue Ma
Assignee: Cosmin Lehene
 Attachments: 0001-Fix-HDFS-630-0.21-svn-1.patch, 
 0001-Fix-HDFS-630-0.21-svn.patch, 
 0001-Fix-HDFS-630-for-0.21-and-trunk-unified.patch, 
 0001-Fix-HDFS-630-for-0.21.patch, 0001-Fix-HDFS-630-svn.patch, 
 0001-Fix-HDFS-630-svn.patch, 0001-Fix-HDFS-630-trunk-svn-1.patch, 
 0001-Fix-HDFS-630-trunk-svn-2.patch, 0001-Fix-HDFS-630-trunk-svn-3.patch, 
 HDFS-630.patch


 Created from HDFS-200.
 If, during a write, the DFSClient sees that a block replica location for a 
 newly allocated block is not connectable, it re-requests the NN for a fresh 
 set of replica locations for the block. It tries this 
 dfs.client.block.write.retries times (default 3), sleeping 6 seconds between 
 each retry (see DFSClient.nextBlockOutputStream).
 This setting works well when you have a reasonably sized cluster; if you have 
 few datanodes in the cluster, every retry may pick the dead datanode and the 
 above logic bails out.
 Our solution: when getting block locations from the namenode, we give the NN 
 the excluded datanodes. The list of dead datanodes applies only to a single 
 block allocation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-630) In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block.

2009-12-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HDFS-630:
---

Status: In Progress  (was: Patch Available)

 In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific 
 datanodes when locating the next block.
 ---

 Key: HDFS-630
 URL: https://issues.apache.org/jira/browse/HDFS-630
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client, name-node
Affects Versions: 0.21.0
Reporter: Ruyue Ma
Assignee: Cosmin Lehene
 Attachments: 0001-Fix-HDFS-630-0.21-svn-1.patch, 
 0001-Fix-HDFS-630-0.21-svn.patch, 
 0001-Fix-HDFS-630-for-0.21-and-trunk-unified.patch, 
 0001-Fix-HDFS-630-for-0.21.patch, 0001-Fix-HDFS-630-svn.patch, 
 0001-Fix-HDFS-630-svn.patch, 0001-Fix-HDFS-630-trunk-svn-1.patch, 
 0001-Fix-HDFS-630-trunk-svn-2.patch, 0001-Fix-HDFS-630-trunk-svn-3.patch, 
 HDFS-630.patch


 Created from HDFS-200.
 If, during a write, the DFSClient sees that a block replica location for a 
 newly allocated block is not connectable, it re-requests the NN for a fresh 
 set of replica locations for the block. It tries this 
 dfs.client.block.write.retries times (default 3), sleeping 6 seconds between 
 each retry (see DFSClient.nextBlockOutputStream).
 This setting works well when you have a reasonably sized cluster; if you have 
 few datanodes in the cluster, every retry may pick the dead datanode and the 
 above logic bails out.
 Our solution: when getting block locations from the namenode, we give the NN 
 the excluded datanodes. The list of dead datanodes applies only to a single 
 block allocation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (HDFS-630) In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block.

2009-12-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reassigned HDFS-630:
--

Assignee: stack  (was: Cosmin Lehene)

 In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific 
 datanodes when locating the next block.
 ---

 Key: HDFS-630
 URL: https://issues.apache.org/jira/browse/HDFS-630
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client, name-node
Affects Versions: 0.21.0
Reporter: Ruyue Ma
Assignee: stack
 Attachments: 0001-Fix-HDFS-630-0.21-svn-1.patch, 
 0001-Fix-HDFS-630-0.21-svn.patch, 
 0001-Fix-HDFS-630-for-0.21-and-trunk-unified.patch, 
 0001-Fix-HDFS-630-for-0.21.patch, 0001-Fix-HDFS-630-svn.patch, 
 0001-Fix-HDFS-630-svn.patch, 0001-Fix-HDFS-630-trunk-svn-1.patch, 
 0001-Fix-HDFS-630-trunk-svn-2.patch, 0001-Fix-HDFS-630-trunk-svn-3.patch, 
 HDFS-630.patch


 Created from HDFS-200.
 If, during a write, the DFSClient sees that a block replica location for a 
 newly allocated block is not connectable, it re-requests the NN for a fresh 
 set of replica locations for the block. It tries this 
 dfs.client.block.write.retries times (default 3), sleeping 6 seconds between 
 each retry (see DFSClient.nextBlockOutputStream).
 This setting works well when you have a reasonably sized cluster; if you have 
 few datanodes in the cluster, every retry may pick the dead datanode and the 
 above logic bails out.
 Our solution: when getting block locations from the namenode, we give the NN 
 the excluded datanodes. The list of dead datanodes applies only to a single 
 block allocation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-630) In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block.

2009-12-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HDFS-630:
---

Status: Patch Available  (was: In Progress)

Trunk v3 applies for me (with some small slop).  Submitting to Hudson.

 In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific 
 datanodes when locating the next block.
 ---

 Key: HDFS-630
 URL: https://issues.apache.org/jira/browse/HDFS-630
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client, name-node
Affects Versions: 0.21.0
Reporter: Ruyue Ma
Assignee: stack
 Attachments: 0001-Fix-HDFS-630-0.21-svn-1.patch, 
 0001-Fix-HDFS-630-0.21-svn.patch, 
 0001-Fix-HDFS-630-for-0.21-and-trunk-unified.patch, 
 0001-Fix-HDFS-630-for-0.21.patch, 0001-Fix-HDFS-630-svn.patch, 
 0001-Fix-HDFS-630-svn.patch, 0001-Fix-HDFS-630-trunk-svn-1.patch, 
 0001-Fix-HDFS-630-trunk-svn-2.patch, 0001-Fix-HDFS-630-trunk-svn-3.patch, 
 HDFS-630.patch


 Created from HDFS-200.
 If, during a write, the DFSClient sees that a block replica location for a 
 newly allocated block is not connectable, it re-requests the NN for a fresh 
 set of replica locations for the block. It tries this 
 dfs.client.block.write.retries times (default 3), sleeping 6 seconds between 
 each retry (see DFSClient.nextBlockOutputStream).
 This setting works well when you have a reasonably sized cluster; if you have 
 few datanodes in the cluster, every retry may pick the dead datanode and the 
 above logic bails out.
 Our solution: when getting block locations from the namenode, we give the NN 
 the excluded datanodes. The list of dead datanodes applies only to a single 
 block allocation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (HDFS-630) In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block.

2009-12-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reassigned HDFS-630:
--

Assignee: Cosmin Lehene  (was: stack)

 In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific 
 datanodes when locating the next block.
 ---

 Key: HDFS-630
 URL: https://issues.apache.org/jira/browse/HDFS-630
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client, name-node
Affects Versions: 0.21.0
Reporter: Ruyue Ma
Assignee: Cosmin Lehene
 Attachments: 0001-Fix-HDFS-630-0.21-svn-1.patch, 
 0001-Fix-HDFS-630-0.21-svn.patch, 
 0001-Fix-HDFS-630-for-0.21-and-trunk-unified.patch, 
 0001-Fix-HDFS-630-for-0.21.patch, 0001-Fix-HDFS-630-svn.patch, 
 0001-Fix-HDFS-630-svn.patch, 0001-Fix-HDFS-630-trunk-svn-1.patch, 
 0001-Fix-HDFS-630-trunk-svn-2.patch, 0001-Fix-HDFS-630-trunk-svn-3.patch, 
 HDFS-630.patch


 Created from HDFS-200.
 If, during a write, the DFSClient sees that a block replica location for a 
 newly allocated block is not connectable, it re-requests the NN for a fresh 
 set of replica locations for the block. It tries this 
 dfs.client.block.write.retries times (default 3), sleeping 6 seconds between 
 each retry (see DFSClient.nextBlockOutputStream).
 This setting works well when you have a reasonably sized cluster; if you have 
 few datanodes in the cluster, every retry may pick the dead datanode and the 
 above logic bails out.
 Our solution: when getting block locations from the namenode, we give the NN 
 the excluded datanodes. The list of dead datanodes applies only to a single 
 block allocation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-630) In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block.

2009-12-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12794651#action_12794651
 ] 

Hadoop QA commented on HDFS-630:


-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12428982/0001-Fix-HDFS-630-0.21-svn-1.patch
  against trunk revision 893650.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 13 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/161/console

This message is automatically generated.

 In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific 
 datanodes when locating the next block.
 ---

 Key: HDFS-630
 URL: https://issues.apache.org/jira/browse/HDFS-630
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client, name-node
Affects Versions: 0.21.0
Reporter: Ruyue Ma
Assignee: Cosmin Lehene
 Attachments: 0001-Fix-HDFS-630-0.21-svn-1.patch, 
 0001-Fix-HDFS-630-0.21-svn.patch, 
 0001-Fix-HDFS-630-for-0.21-and-trunk-unified.patch, 
 0001-Fix-HDFS-630-for-0.21.patch, 0001-Fix-HDFS-630-svn.patch, 
 0001-Fix-HDFS-630-svn.patch, 0001-Fix-HDFS-630-trunk-svn-1.patch, 
 0001-Fix-HDFS-630-trunk-svn-2.patch, 0001-Fix-HDFS-630-trunk-svn-3.patch, 
 HDFS-630.patch


 Created from HDFS-200.
 If, during a write, the DFSClient sees that a block replica location for a 
 newly allocated block is not connectable, it re-requests the NN for a fresh 
 set of replica locations for the block. It tries this 
 dfs.client.block.write.retries times (default 3), sleeping 6 seconds between 
 each retry (see DFSClient.nextBlockOutputStream).
 This setting works well when you have a reasonably sized cluster; if you have 
 few datanodes in the cluster, every retry may pick the dead datanode and the 
 above logic bails out.
 Our solution: when getting block locations from the namenode, we give the NN 
 the excluded datanodes. The list of dead datanodes applies only to a single 
 block allocation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (HDFS-630) In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block.

2009-12-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reassigned HDFS-630:
--

Assignee: stack  (was: Cosmin Lehene)

 In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific 
 datanodes when locating the next block.
 ---

 Key: HDFS-630
 URL: https://issues.apache.org/jira/browse/HDFS-630
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client, name-node
Affects Versions: 0.21.0
Reporter: Ruyue Ma
Assignee: stack
 Attachments: 0001-Fix-HDFS-630-0.21-svn-1.patch, 
 0001-Fix-HDFS-630-0.21-svn.patch, 
 0001-Fix-HDFS-630-for-0.21-and-trunk-unified.patch, 
 0001-Fix-HDFS-630-for-0.21.patch, 0001-Fix-HDFS-630-svn.patch, 
 0001-Fix-HDFS-630-svn.patch, 0001-Fix-HDFS-630-trunk-svn-1.patch, 
 0001-Fix-HDFS-630-trunk-svn-2.patch, 0001-Fix-HDFS-630-trunk-svn-3.patch, 
 0001-Fix-HDFS-630-trunk-svn-3.patch, HDFS-630.patch


 Created from HDFS-200.
 If, during a write, the DFSClient sees that a block replica location for a 
 newly allocated block is not connectable, it re-requests the NN for a fresh 
 set of replica locations for the block. It tries this 
 dfs.client.block.write.retries times (default 3), sleeping 6 seconds between 
 each retry (see DFSClient.nextBlockOutputStream).
 This setting works well when you have a reasonably sized cluster; if you have 
 few datanodes in the cluster, every retry may pick the dead datanode and the 
 above logic bails out.
 Our solution: when getting block locations from the namenode, we give the NN 
 the excluded datanodes. The list of dead datanodes applies only to a single 
 block allocation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-630) In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block.

2009-12-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HDFS-630:
---

Attachment: 0001-Fix-HDFS-630-trunk-svn-3.patch

Re-attach the v3 trunk patch so it becomes the last patch uploaded and Hudson 
picks it up instead of the 0.21 version.

 In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific 
 datanodes when locating the next block.
 ---

 Key: HDFS-630
 URL: https://issues.apache.org/jira/browse/HDFS-630
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client, name-node
Affects Versions: 0.21.0
Reporter: Ruyue Ma
Assignee: Cosmin Lehene
 Attachments: 0001-Fix-HDFS-630-0.21-svn-1.patch, 
 0001-Fix-HDFS-630-0.21-svn.patch, 
 0001-Fix-HDFS-630-for-0.21-and-trunk-unified.patch, 
 0001-Fix-HDFS-630-for-0.21.patch, 0001-Fix-HDFS-630-svn.patch, 
 0001-Fix-HDFS-630-svn.patch, 0001-Fix-HDFS-630-trunk-svn-1.patch, 
 0001-Fix-HDFS-630-trunk-svn-2.patch, 0001-Fix-HDFS-630-trunk-svn-3.patch, 
 0001-Fix-HDFS-630-trunk-svn-3.patch, HDFS-630.patch


 Created from HDFS-200.
 If, during a write, the DFSClient sees that a block replica location for a 
 newly allocated block is not connectable, it re-requests the NN for a fresh 
 set of replica locations for the block. It tries this 
 dfs.client.block.write.retries times (default 3), sleeping 6 seconds between 
 each retry (see DFSClient.nextBlockOutputStream).
 This setting works well when you have a reasonably sized cluster; if you have 
 few datanodes in the cluster, every retry may pick the dead datanode and the 
 above logic bails out.
 Our solution: when getting block locations from the namenode, we give the NN 
 the excluded datanodes. The list of dead datanodes applies only to a single 
 block allocation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-630) In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block.

2009-12-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HDFS-630:
---

Status: In Progress  (was: Patch Available)

 In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific 
 datanodes when locating the next block.
 ---

 Key: HDFS-630
 URL: https://issues.apache.org/jira/browse/HDFS-630
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client, name-node
Affects Versions: 0.21.0
Reporter: Ruyue Ma
Assignee: Cosmin Lehene
 Attachments: 0001-Fix-HDFS-630-0.21-svn-1.patch, 
 0001-Fix-HDFS-630-0.21-svn.patch, 
 0001-Fix-HDFS-630-for-0.21-and-trunk-unified.patch, 
 0001-Fix-HDFS-630-for-0.21.patch, 0001-Fix-HDFS-630-svn.patch, 
 0001-Fix-HDFS-630-svn.patch, 0001-Fix-HDFS-630-trunk-svn-1.patch, 
 0001-Fix-HDFS-630-trunk-svn-2.patch, 0001-Fix-HDFS-630-trunk-svn-3.patch, 
 0001-Fix-HDFS-630-trunk-svn-3.patch, HDFS-630.patch


 Created from HDFS-200.
 If, during a write, the DFSClient sees that a block replica location for a 
 newly allocated block is not connectable, it re-requests the NN for a fresh 
 set of replica locations for the block. It tries this 
 dfs.client.block.write.retries times (default 3), sleeping 6 seconds between 
 each retry (see DFSClient.nextBlockOutputStream).
 This setting works well when you have a reasonably sized cluster; if you have 
 few datanodes in the cluster, every retry may pick the dead datanode and the 
 above logic bails out.
 Our solution: when getting block locations from the namenode, we give the NN 
 the excluded datanodes. The list of dead datanodes applies only to a single 
 block allocation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HDFS-630) In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block.

2009-12-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HDFS-630:
---

Status: Patch Available  (was: In Progress)

Try Hudson again.  Hopefully it picks up the trunk patch this time.

 In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific 
 datanodes when locating the next block.
 ---

 Key: HDFS-630
 URL: https://issues.apache.org/jira/browse/HDFS-630
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client, name-node
Affects Versions: 0.21.0
Reporter: Ruyue Ma
Assignee: stack
 Attachments: 0001-Fix-HDFS-630-0.21-svn-1.patch, 
 0001-Fix-HDFS-630-0.21-svn.patch, 
 0001-Fix-HDFS-630-for-0.21-and-trunk-unified.patch, 
 0001-Fix-HDFS-630-for-0.21.patch, 0001-Fix-HDFS-630-svn.patch, 
 0001-Fix-HDFS-630-svn.patch, 0001-Fix-HDFS-630-trunk-svn-1.patch, 
 0001-Fix-HDFS-630-trunk-svn-2.patch, 0001-Fix-HDFS-630-trunk-svn-3.patch, 
 0001-Fix-HDFS-630-trunk-svn-3.patch, HDFS-630.patch


 Created from HDFS-200.
 If, during a write, the DFSClient sees that a block replica location for a 
 newly allocated block is not connectable, it re-requests the NN for a fresh 
 set of replica locations for the block. It tries this 
 dfs.client.block.write.retries times (default 3), sleeping 6 seconds between 
 each retry (see DFSClient.nextBlockOutputStream).
 This setting works well when you have a reasonably sized cluster; if you have 
 few datanodes in the cluster, every retry may pick the dead datanode and the 
 above logic bails out.
 Our solution: when getting block locations from the namenode, we give the NN 
 the excluded datanodes. The list of dead datanodes applies only to a single 
 block allocation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (HDFS-630) In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block.

2009-12-26 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reassigned HDFS-630:
--

Assignee: Cosmin Lehene  (was: stack)

 In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific 
 datanodes when locating the next block.
 ---

 Key: HDFS-630
 URL: https://issues.apache.org/jira/browse/HDFS-630
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client, name-node
Affects Versions: 0.21.0
Reporter: Ruyue Ma
Assignee: Cosmin Lehene
 Attachments: 0001-Fix-HDFS-630-0.21-svn-1.patch, 
 0001-Fix-HDFS-630-0.21-svn.patch, 
 0001-Fix-HDFS-630-for-0.21-and-trunk-unified.patch, 
 0001-Fix-HDFS-630-for-0.21.patch, 0001-Fix-HDFS-630-svn.patch, 
 0001-Fix-HDFS-630-svn.patch, 0001-Fix-HDFS-630-trunk-svn-1.patch, 
 0001-Fix-HDFS-630-trunk-svn-2.patch, 0001-Fix-HDFS-630-trunk-svn-3.patch, 
 0001-Fix-HDFS-630-trunk-svn-3.patch, HDFS-630.patch


 Created from HDFS-200.
 If, during a write, the DFSClient sees that a block replica location for a 
 newly allocated block is not connectable, it re-requests the NN for a fresh 
 set of replica locations for the block. It tries this 
 dfs.client.block.write.retries times (default 3), sleeping 6 seconds between 
 each retry (see DFSClient.nextBlockOutputStream).
 This setting works well when you have a reasonably sized cluster; if you have 
 few datanodes in the cluster, every retry may pick the dead datanode and the 
 above logic bails out.
 Our solution: when getting block locations from the namenode, we give the NN 
 the excluded datanodes. The list of dead datanodes applies only to a single 
 block allocation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HDFS-630) In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific datanodes when locating the next block.

2009-12-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12794657#action_12794657
 ] 

Hadoop QA commented on HDFS-630:


-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12428986/0001-Fix-HDFS-630-trunk-svn-3.patch
  against trunk revision 893650.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 13 new or modified tests.

-1 javadoc.  The javadoc tool appears to have generated 1 warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed core unit tests.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/162/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/162/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/162/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/162/console

This message is automatically generated.

 In DFSOutputStream.nextBlockOutputStream(), the client can exclude specific 
 datanodes when locating the next block.
 ---

 Key: HDFS-630
 URL: https://issues.apache.org/jira/browse/HDFS-630
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs client, name-node
Affects Versions: 0.21.0
Reporter: Ruyue Ma
Assignee: Cosmin Lehene
 Attachments: 0001-Fix-HDFS-630-0.21-svn-1.patch, 
 0001-Fix-HDFS-630-0.21-svn.patch, 
 0001-Fix-HDFS-630-for-0.21-and-trunk-unified.patch, 
 0001-Fix-HDFS-630-for-0.21.patch, 0001-Fix-HDFS-630-svn.patch, 
 0001-Fix-HDFS-630-svn.patch, 0001-Fix-HDFS-630-trunk-svn-1.patch, 
 0001-Fix-HDFS-630-trunk-svn-2.patch, 0001-Fix-HDFS-630-trunk-svn-3.patch, 
 0001-Fix-HDFS-630-trunk-svn-3.patch, HDFS-630.patch


 Created from HDFS-200.
 If, during a write, the DFSClient sees that a block replica location for a 
 newly allocated block is not connectable, it re-requests the NN for a fresh 
 set of replica locations for the block. It tries this 
 dfs.client.block.write.retries times (default 3), sleeping 6 seconds between 
 each retry (see DFSClient.nextBlockOutputStream).
 This setting works well when you have a reasonably sized cluster; if you have 
 few datanodes in the cluster, every retry may pick the dead datanode and the 
 above logic bails out.
 Our solution: when getting block locations from the namenode, we give the NN 
 the excluded datanodes. The list of dead datanodes applies only to a single 
 block allocation.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.