[jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

2018-05-20 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482020#comment-16482020
 ] 

Wei-Chiu Chuang commented on HDFS-8884:
---

If I understand the patch correctly, this jira considers decommissioning nodes 
when placing blocks. Therefore HDFS-5114 and HDFS-4861 are obsolete.

> Fail-fast check in BlockPlacementPolicyDefault#chooseTarget
> ---
>
> Key: HDFS-8884
> URL: https://issues.apache.org/jira/browse/HDFS-8884
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Major
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-8884.001.patch, HDFS-8884.002.patch
>
>
> In current BlockPlacementPolicyDefault, when choosing datanode storage to 
> place block, we have following logic:
> {code}
> final DatanodeStorageInfo[] storages = DFSUtil.shuffle(
>     chosenNode.getStorageInfos());
> int i = 0;
> boolean search = true;
> for (Iterator<Map.Entry<StorageType, Integer>> iter = storageTypes
>     .entrySet().iterator(); search && iter.hasNext(); ) {
>   Map.Entry<StorageType, Integer> entry = iter.next();
>   for (i = 0; i < storages.length; i++) {
>     StorageType type = entry.getKey();
>     final int newExcludedNodes = addIfIsGoodTarget(storages[i],
> {code}
> We iterate over all storages of the candidate datanode (two nested {{for}} 
> loops, although the iteration counts are usually small) even when the 
> datanode itself is not a good target (e.g. decommissioned, stale, or too 
> busy), since currently all the checks happen in {{addIfIsGoodTarget}}.
> We can fail fast: check the datanode-level conditions first; if the datanode 
> is not good, there is no need to shuffle and iterate its storages. This is 
> more efficient.
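The restructuring the description proposes can be sketched in simplified form. This is a hypothetical stand-in (the class and field names below are not the actual Hadoop API): node-level checks run once, before the per-storage loops, so an unsuitable datanode never triggers the shuffle or the storage iteration.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class FailFastChooseTarget {
  // Simplified stand-in for a datanode with its storages.
  static class Node {
    final boolean decommissioned, stale, tooBusy;
    final String[] storages;
    Node(boolean decommissioned, boolean stale, boolean tooBusy,
         String... storages) {
      this.decommissioned = decommissioned;
      this.stale = stale;
      this.tooBusy = tooBusy;
      this.storages = storages;
    }
  }

  // Node-level conditions, checked up front (the fail-fast part).
  static boolean isGoodDatanode(Node n) {
    return !n.decommissioned && !n.stale && !n.tooBusy;
  }

  // Returns the storages examined; empty when the node fails fast.
  static List<String> chooseStorages(Node n) {
    List<String> examined = new ArrayList<>();
    if (!isGoodDatanode(n)) {
      return examined;             // skip shuffle and storage iteration
    }
    List<String> storages = new ArrayList<>(Arrays.asList(n.storages));
    Collections.shuffle(storages); // only shuffled for good nodes
    for (String s : storages) {
      examined.add(s);             // per-storage checks would go here
    }
    return examined;
  }

  public static void main(String[] args) {
    Node bad = new Node(true, false, false, "s1", "s2");
    Node good = new Node(false, false, false, "s1", "s2");
    System.out.println(chooseStorages(bad).size());   // 0: failed fast
    System.out.println(chooseStorages(good).size());  // 2: storages examined
  }
}
```

The point of the patch is exactly this reordering: the node-level predicate is cheap and evaluated once, whereas the original code paid for the shuffle and both loops before discovering the node was unusable.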



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707446#comment-14707446
 ] 

Hudson commented on HDFS-8884:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #294 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/294/])
HDFS-8884. Fail-fast check in BlockPlacementPolicyDefault#chooseTarget. (yliu) 
(yliu: rev 80a29906bcd718bbba223fa099e523281d9f3369)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDefaultBlockPlacementPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt



[jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707352#comment-14707352
 ] 

Hudson commented on HDFS-8884:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1024 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1024/])
HDFS-8884. Fail-fast check in BlockPlacementPolicyDefault#chooseTarget. (yliu) 
(yliu: rev 80a29906bcd718bbba223fa099e523281d9f3369)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDefaultBlockPlacementPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java




[jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707592#comment-14707592
 ] 

Hudson commented on HDFS-8884:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #283 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/283/])
HDFS-8884. Fail-fast check in BlockPlacementPolicyDefault#chooseTarget. (yliu) 
(yliu: rev 80a29906bcd718bbba223fa099e523281d9f3369)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDefaultBlockPlacementPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java




[jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

2015-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707608#comment-14707608
 ] 

Hudson commented on HDFS-8884:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2221 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2221/])
HDFS-8884. Fail-fast check in BlockPlacementPolicyDefault#chooseTarget. (yliu) 
(yliu: rev 80a29906bcd718bbba223fa099e523281d9f3369)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDefaultBlockPlacementPolicy.java




[jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

2015-08-20 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704586#comment-14704586
 ] 

Vinayakumar B commented on HDFS-8884:
-

+1 for the latest patch.



[jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

2015-08-20 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704747#comment-14704747
 ] 

Yi Liu commented on HDFS-8884:
--

Thanks [~vinayrpet] for the review! Committed to trunk and branch-2.



[jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

2015-08-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704912#comment-14704912
 ] 

Hudson commented on HDFS-8884:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8326 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8326/])
HDFS-8884. Fail-fast check in BlockPlacementPolicyDefault#chooseTarget. (yliu) 
(yliu: rev 80a29906bcd718bbba223fa099e523281d9f3369)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDefaultBlockPlacementPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java




[jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

2015-08-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704955#comment-14704955
 ] 

Hudson commented on HDFS-8884:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2239 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2239/])
HDFS-8884. Fail-fast check in BlockPlacementPolicyDefault#chooseTarget. (yliu) 
(yliu: rev 80a29906bcd718bbba223fa099e523281d9f3369)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDefaultBlockPlacementPolicy.java




[jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

2015-08-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704906#comment-14704906
 ] 

Hudson commented on HDFS-8884:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #290 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/290/])
HDFS-8884. Fail-fast check in BlockPlacementPolicyDefault#chooseTarget. (yliu) 
(yliu: rev 80a29906bcd718bbba223fa099e523281d9f3369)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDefaultBlockPlacementPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java




[jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

2015-08-19 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702918#comment-14702918
 ] 

Yi Liu commented on HDFS-8884:
--

The two test failures are not related to this patch.



[jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

2015-08-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702839#comment-14702839
 ] 

Hadoop QA commented on HDFS-8884:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  16m  1s | Findbugs (version ) appears to 
be broken on trunk. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   8m  7s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  8s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 34s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 34s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 16s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 175m 21s | Tests failed in hadoop-hdfs. |
| | | 218m 37s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestAppendSnapshotTruncate |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
| Timed out tests | org.apache.hadoop.cli.TestHDFSCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12751189/HDFS-8884.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 22dc5fc |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12040/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12040/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12040/console |


This message was automatically generated.



[jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

2015-08-18 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14701127#comment-14701127
 ] 

Vinayakumar B commented on HDFS-8884:
-

Overall the patch looks good. Thanks [~hitliuyi]

1. There are some checkstyle nits that need to be cleaned up.

2. In the test, {{testPlacementWithLocalRackNodesDecommissioned}} doesn't 
ensure that {{dnd3}} belongs to the client's rack. Add a check before 
verifying placement.

+1 once addressed.




[jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

2015-08-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14702419#comment-14702419
 ] 

Hadoop QA commented on HDFS-8884:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  13m 44s | Pre-patch trunk JavaDoc 
compilation may be broken. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:red}-1{color} | javac |   0m 11s | The patch appears to cause the 
build to fail. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12751189/HDFS-8884.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7ecbfd4 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12038/console |


This message was automatically generated.

 Fail-fast check in BlockPlacementPolicyDefault#chooseTarget
 ---

 Key: HDFS-8884
 URL: https://issues.apache.org/jira/browse/HDFS-8884
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8884.001.patch, HDFS-8884.002.patch


 In current BlockPlacementPolicyDefault, when choosing datanode storage to 
 place block, we have following logic:
 {code}
 final DatanodeStorageInfo[] storages = DFSUtil.shuffle(
 chosenNode.getStorageInfos());
 int i = 0;
 boolean search = true;
 for (IteratorMap.EntryStorageType, Integer iter = storageTypes
 .entrySet().iterator(); search  iter.hasNext(); ) {
   Map.EntryStorageType, Integer entry = iter.next();
   for (i = 0; i  storages.length; i++) {
 StorageType type = entry.getKey();
 final int newExcludedNodes = addIfIsGoodTarget(storages[i],
 {code}
 We iterate over all storages of the candidate datanode (two nested {{for}} 
 loops, although the counts are usually small) even when the datanode itself 
 is not a good target (e.g. decommissioned, stale, or too busy), since 
 currently all the checks are done in {{addIfIsGoodTarget}}.
 We can fail fast: check the datanode-level conditions first; if the datanode 
 is not good, there is no need to shuffle and iterate its storages. This is 
 more efficient.
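The proposed fail-fast ordering can be sketched as follows. This is a minimal, self-contained illustration of the idea, not the actual patch: {{Node}}, {{chooseFrom}}, and the boolean flags are hypothetical stand-ins for the datanode state and the per-storage checks that {{addIfIsGoodTarget}} performs.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

/** Sketch of the fail-fast idea; Node is a simplified stand-in for a datanode. */
public class FailFastSketch {
    static class Node {
        final boolean decommissioned;
        final boolean stale;
        final String[] storages;
        Node(boolean decommissioned, boolean stale, String... storages) {
            this.decommissioned = decommissioned;
            this.stale = stale;
            this.storages = storages;
        }
    }

    /** Counts how many storages were actually inspected, to show the saving. */
    static int storagesExamined = 0;

    /** Node-level checks run first: if the datanode itself is unsuitable,
     *  we return immediately and never shuffle or iterate its storages. */
    static boolean chooseFrom(Node node) {
        if (node.decommissioned || node.stale) {
            return false;                        // fail fast: zero storage work
        }
        List<String> shuffled = Arrays.asList(node.storages.clone());
        Collections.shuffle(shuffled);           // shuffle only for good nodes
        for (String s : shuffled) {              // per-storage checks happen here
            storagesExamined++;
            if (!s.isEmpty()) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Node bad = new Node(true, false, "DISK", "SSD");   // decommissioned
        Node good = new Node(false, false, "DISK");
        System.out.println(chooseFrom(bad) + " " + chooseFrom(good)
            + " storagesExamined=" + storagesExamined);
    }
}
```

With the node-level check hoisted out, the decommissioned node is rejected without touching any of its storages, which is exactly the saving the jira describes.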



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

2015-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14681818#comment-14681818
 ] 

Hadoop QA commented on HDFS-8884:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 10s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 36s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 41s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 21s | The applied patch generated  4 
new checkstyle issues (total was 58, now 56). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 7  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 22s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m  3s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 175m 17s | Tests failed in hadoop-hdfs. |
| | | 218m 58s | |
\\
\\
|| Reason || Tests ||
| Timed out tests | org.apache.hadoop.cli.TestHDFSCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12749787/HDFS-8884.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / fa1d84a |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/11961/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11961/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11961/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11961/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/11961/console |


This message was automatically generated.

 Fail-fast check in BlockPlacementPolicyDefault#chooseTarget
 ---

 Key: HDFS-8884
 URL: https://issues.apache.org/jira/browse/HDFS-8884
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HDFS-8884.001.patch


 In current BlockPlacementPolicyDefault, when choosing datanode storage to 
 place block, we have following logic:
 {code}
 final DatanodeStorageInfo[] storages = DFSUtil.shuffle(
 chosenNode.getStorageInfos());
 int i = 0;
 boolean search = true;
 for (Iterator<Map.Entry<StorageType, Integer>> iter = storageTypes
 .entrySet().iterator(); search && iter.hasNext(); ) {
   Map.Entry<StorageType, Integer> entry = iter.next();
   for (i = 0; i < storages.length; i++) {
 StorageType type = entry.getKey();
 final int newExcludedNodes = addIfIsGoodTarget(storages[i],
 {code}
 We iterate over all storages of the candidate datanode (two nested {{for}} 
 loops, although the counts are usually small) even when the datanode itself 
 is not a good target (e.g. decommissioned, stale, or too busy), since 
 currently all the checks are done in {{addIfIsGoodTarget}}.
 We can fail fast: check the datanode-level conditions first; if the datanode 
 is not good, there is no need to shuffle and iterate its storages. This is 
 more efficient.


