[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2016-08-24 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435895#comment-15435895
 ] 

Ravi Prakash commented on HDFS-9205:


Thanks for the change, Nicholas! Should this line be modified?
https://github.com/apache/hadoop/blob/a1f3293762dddb0ca953d1145f5b53d9086b25b8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/LowRedundancyBlocks.java#L62

I think this queue most often held missing blocks, so it didn't really make 
sense to re-replicate them anyway. We should be careful about removing this 
queue though, because it's where the [count of missing blocks is taken 
from|https://github.com/apache/hadoop/blob/a1f3293762dddb0ca953d1145f5b53d9086b25b8/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L4112].
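A minimal sketch of the concern above (hypothetical class and method names, not the actual Hadoop code): the corrupt-block queue doubles as the source of the missing-block metric, so the queue cannot simply be dropped even if it is never scheduled for replication.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a standalone queue standing in for the
// QUEUE_WITH_CORRUPT_BLOCKS level inside UnderReplicatedBlocks.
class MissingBlockCountSketch {
    private final List<String> corruptQueue = new ArrayList<>();

    void markCorrupt(String block) {
        corruptQueue.add(block);
    }

    // The missing-block count is derived from the corrupt queue's size.
    // This is why removing the queue would silently break the metric,
    // even though its blocks are no longer scheduled for replication.
    long getMissingBlocksCount() {
        return corruptQueue.size();
    }
}
```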

> Do not schedule corrupt blocks for replication
> --
>
> Key: HDFS-9205
> URL: https://issues.apache.org/jira/browse/HDFS-9205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: h9205_20151007.patch, h9205_20151007b.patch, 
> h9205_20151008.patch, h9205_20151009.patch, h9205_20151009b.patch, 
> h9205_20151013.patch, h9205_20151015.patch
>
>
> Corrupted blocks are, by definition, blocks that cannot be read; as a 
> consequence, they cannot be replicated. In UnderReplicatedBlocks, there is 
> a queue, QUEUE_WITH_CORRUPT_BLOCKS, and chooseUnderReplicatedBlocks may 
> choose blocks from it. Scheduling corrupt blocks for replication wastes 
> resources and potentially slows down replication of the higher-priority 
> blocks.
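The change described above can be illustrated with a simplified sketch (hypothetical names and structure, not the actual UnderReplicatedBlocks implementation): the chooser walks the priority queues in order but stops before the corrupt-blocks level, so unreadable blocks are never handed to the replication scheduler.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified stand-in for UnderReplicatedBlocks.
class UnderReplicatedBlocksSketch {
    // Priority levels, lowest number = highest priority, mirroring the
    // constants in the real class.
    static final int QUEUE_HIGHEST_PRIORITY = 0;
    static final int QUEUE_VERY_UNDER_REPLICATED = 1;
    static final int QUEUE_UNDER_REPLICATED = 2;
    static final int QUEUE_REPLICAS_BADLY_DISTRIBUTED = 3;
    static final int QUEUE_WITH_CORRUPT_BLOCKS = 4;
    static final int LEVEL = 5;

    private final List<List<String>> queues = new ArrayList<>();

    UnderReplicatedBlocksSketch() {
        for (int i = 0; i < LEVEL; i++) {
            queues.add(new ArrayList<>());
        }
    }

    void add(String block, int priority) {
        queues.get(priority).add(block);
    }

    // Choose up to n blocks for replication. The loop deliberately stops
    // before QUEUE_WITH_CORRUPT_BLOCKS: corrupt blocks cannot be read, so
    // scheduling them would only waste replication work -- the behavior
    // HDFS-9205 introduces.
    List<String> chooseUnderReplicatedBlocks(int n) {
        List<String> chosen = new ArrayList<>();
        for (int prio = 0; prio < QUEUE_WITH_CORRUPT_BLOCKS
                && chosen.size() < n; prio++) {
            for (String b : queues.get(prio)) {
                if (chosen.size() >= n) {
                    break;
                }
                chosen.add(b);
            }
        }
        return chosen;
    }
}
```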



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961048#comment-14961048
 ] 

Hudson commented on HDFS-9205:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #542 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/542/])
Revert "Move HDFS-9205 to trunk in CHANGES.txt." (szetszwo: rev 
a554701fe4402ae30461e2ef165cb60970a202a0)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt




[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961226#comment-14961226
 ] 

Hudson commented on HDFS-9205:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2443 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2443/])
Revert "Move HDFS-9205 to trunk in CHANGES.txt." (szetszwo: rev 
a554701fe4402ae30461e2ef165cb60970a202a0)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt




[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961086#comment-14961086
 ] 

Hudson commented on HDFS-9205:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2491 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2491/])
Revert "Move HDFS-9205 to trunk in CHANGES.txt." (szetszwo: rev 
a554701fe4402ae30461e2ef165cb60970a202a0)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt




[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961316#comment-14961316
 ] 

Hudson commented on HDFS-9205:
--

ABORTED: Integrated in Hadoop-Hdfs-trunk-Java8 #506 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/506/])
Revert "Move HDFS-9205 to trunk in CHANGES.txt." (szetszwo: rev 
a554701fe4402ae30461e2ef165cb60970a202a0)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt




[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960870#comment-14960870
 ] 

Hudson commented on HDFS-9205:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8650 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8650/])
Revert "Move HDFS-9205 to trunk in CHANGES.txt." (szetszwo: rev 
a554701fe4402ae30461e2ef165cb60970a202a0)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt




[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14960974#comment-14960974
 ] 

Hudson commented on HDFS-9205:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1278 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1278/])
Revert "Move HDFS-9205 to trunk in CHANGES.txt." (szetszwo: rev 
a554701fe4402ae30461e2ef165cb60970a202a0)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt




[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14961036#comment-14961036
 ] 

Hudson commented on HDFS-9205:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #557 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/557/])
Revert "Move HDFS-9205 to trunk in CHANGES.txt." (szetszwo: rev 
a554701fe4402ae30461e2ef165cb60970a202a0)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt




[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958761#comment-14958761
 ] 

Hudson commented on HDFS-9205:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #535 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/535/])
HDFS-9205. Do not schedule corrupt blocks for replication.  (szetszwo) 
(szetszwo: rev 5411dc559d5f73e4153e76fdff94a26869c17a37)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/NumberReplicas.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlockQueues.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
Move HDFS-9205 to trunk in CHANGES.txt. (szetszwo: rev 
a49298d585f2cbd3bb81579f6e5d0d7b69126264)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt




[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958786#comment-14958786
 ] 

Hudson commented on HDFS-9205:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #548 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/548/])
HDFS-9205. Do not schedule corrupt blocks for replication.  (szetszwo) 
(szetszwo: rev 5411dc559d5f73e4153e76fdff94a26869c17a37)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/NumberReplicas.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlockQueues.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
Move HDFS-9205 to trunk in CHANGES.txt. (szetszwo: rev 
a49298d585f2cbd3bb81579f6e5d0d7b69126264)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt




[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-15 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958602#comment-14958602
 ] 

Tsz Wo Nicholas Sze commented on HDFS-9205:
---

The failed tests are not related.



[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958668#comment-14958668
 ] 

Hudson commented on HDFS-9205:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8641 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8641/])
HDFS-9205. Do not schedule corrupt blocks for replication.  (szetszwo) 
(szetszwo: rev 5411dc559d5f73e4153e76fdff94a26869c17a37)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlockQueues.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/NumberReplicas.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java




[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958750#comment-14958750
 ] 

Hudson commented on HDFS-9205:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1271 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1271/])
HDFS-9205. Do not schedule corrupt blocks for replication.  (szetszwo) 
(szetszwo: rev 5411dc559d5f73e4153e76fdff94a26869c17a37)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlockQueues.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/NumberReplicas.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
Move HDFS-9205 to trunk in CHANGES.txt. (szetszwo: rev 
a49298d585f2cbd3bb81579f6e5d0d7b69126264)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt




[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958816#comment-14958816
 ] 

Hudson commented on HDFS-9205:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2484 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2484/])
HDFS-9205. Do not schedule corrupt blocks for replication.  (szetszwo) 
(szetszwo: rev 5411dc559d5f73e4153e76fdff94a26869c17a37)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/NumberReplicas.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlockQueues.java
Move HDFS-9205 to trunk in CHANGES.txt. (szetszwo: rev 
a49298d585f2cbd3bb81579f6e5d0d7b69126264)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt




[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958697#comment-14958697
 ] 

Hudson commented on HDFS-9205:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8642 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8642/])
Move HDFS-9205 to trunk in CHANGES.txt. (szetszwo: rev 
a49298d585f2cbd3bb81579f6e5d0d7b69126264)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt




[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958452#comment-14958452
 ] 

Hadoop QA commented on HDFS-9205:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  20m 30s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   8m 50s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 14s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 20s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 34s | The applied patch generated  7 
new checkstyle issues (total was 201, now 204). |
| {color:red}-1{color} | whitespace |   0m  2s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 40s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 25s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests |  65m 45s | Tests failed in hadoop-hdfs. |
| | | 116m 36s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
|   | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.TestSafeModeWithStripedFile |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.fs.TestGlobPaths |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766712/h9205_20151015.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / c80b3a8 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12998/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html
 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12998/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12998/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12998/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12998/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12998/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12998/console |


This message was automatically generated.



[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958924#comment-14958924
 ] 

Hudson commented on HDFS-9205:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2438 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2438/])
HDFS-9205. Do not schedule corrupt blocks for replication.  (szetszwo) 
(szetszwo: rev 5411dc559d5f73e4153e76fdff94a26869c17a37)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/NumberReplicas.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlockQueues.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
Move HDFS-9205 to trunk in CHANGES.txt. (szetszwo: rev 
a49298d585f2cbd3bb81579f6e5d0d7b69126264)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt




[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14959192#comment-14959192
 ] 

Hudson commented on HDFS-9205:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #501 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/501/])
HDFS-9205. Do not schedule corrupt blocks for replication.  (szetszwo) 
(szetszwo: rev 5411dc559d5f73e4153e76fdff94a26869c17a37)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DecommissionManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/NumberReplicas.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestUnderReplicatedBlockQueues.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
Move HDFS-9205 to trunk in CHANGES.txt. (szetszwo: rev 
a49298d585f2cbd3bb81579f6e5d0d7b69126264)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt




[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954494#comment-14954494
 ] 

Hadoop QA commented on HDFS-9205:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 23s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   8m 30s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 22s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 25s | The applied patch generated  7 
new checkstyle issues (total was 202, now 205). |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 34s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 12s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 187m  5s | Tests failed in hadoop-hdfs. |
| | | 234m  5s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.hdfs.web.TestWebHDFSOAuth2 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12766253/h9205_20151013.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / c60a16f |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12946/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12946/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12946/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12946/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12946/console |


This message was automatically generated.



[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-12 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14954147#comment-14954147
 ] 

Jing Zhao commented on HDFS-9205:
-

# Nit: need to fix the javadoc of {{UnderReplicatedBlocks}}, 
"getPriority(BlockInfo, int, int, int)" should be updated to 
"getPriority(BlockInfo, int, int, int, int)".
# Minor: since the iterator of the LightWeightLinkedSet already correctly 
throws NoSuchElementException when there is no next element, it may not be 
necessary to do the hasNext check.
{code}
  public BlockInfo next() {
if (!hasNext()) {
  throw new NoSuchElementException();
}
return b.next();
  }
{code}

+1 after addressing these.
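For illustration, a delegating iterator written as suggested above might look like the following minimal sketch. The class name, element type, and backing collection are hypothetical stand-ins; this is not the actual UnderReplicatedBlocks or LightWeightLinkedSet code, only a demonstration that delegating directly preserves the NoSuchElementException contract.

```java
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Minimal sketch: a wrapper that delegates next() straight to the underlying
// iterator, relying on it to throw NoSuchElementException when exhausted,
// so no explicit hasNext() guard is needed.
class DelegatingBlockIterator implements Iterator<String> {
  private final Iterator<String> inner;

  DelegatingBlockIterator(Iterator<String> inner) {
    this.inner = inner;
  }

  @Override
  public boolean hasNext() {
    return inner.hasNext();
  }

  @Override
  public String next() {
    // inner.next() already throws NoSuchElementException on an empty set.
    return inner.next();
  }

  public static void main(String[] args) {
    Iterator<String> it =
        new DelegatingBlockIterator(List.of("blk_1").iterator());
    System.out.println(it.next());  // prints blk_1
  }
}
```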





[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14952258#comment-14952258
 ] 

Hadoop QA commented on HDFS-9205:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m  8s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 52s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 17s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 20s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 22s | The applied patch generated  8 
new checkstyle issues (total was 203, now 207). |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 28s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 10s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 188m 20s | Tests failed in hadoop-hdfs. |
| | | 234m  3s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.web.TestWebHDFSOAuth2 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765729/h9205_20151009b.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / db93047 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12925/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12925/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12925/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12925/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12925/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12925/console |


This message was automatically generated.



[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1495#comment-1495
 ] 

Hadoop QA commented on HDFS-9205:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  21m 17s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   8m 55s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 28s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 21s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 30s | The applied patch generated  8 
new checkstyle issues (total was 203, now 207). |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 31s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 15s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 187m 57s | Tests failed in hadoop-hdfs. |
| | | 238m 27s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
|   | hadoop.hdfs.web.TestWebHDFS |
|   | hadoop.hdfs.web.TestWebHDFSOAuth2 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765722/h9205_20151009.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / e1bf8b3 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12878/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12878/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12878/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12878/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12878/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12878/console |


This message was automatically generated.



[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14950026#comment-14950026
 ] 

Hadoop QA commented on HDFS-9205:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 19s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   8m 11s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 16s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 19s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m 28s | The applied patch generated  8 
new checkstyle issues (total was 203, now 207). |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 32s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   3m 14s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 184m 37s | Tests failed in hadoop-hdfs. |
| | | 231m  3s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.namenode.TestFSImageWithSnapshot |
|   | hadoop.hdfs.TestDFSShell |
| Timed out tests | org.apache.hadoop.hdfs.TestReplication |
|   | org.apache.hadoop.hdfs.TestDecommission |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765729/h9205_20151009b.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / e1bf8b3 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12880/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12880/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12880/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12880/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12880/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12880/console |


This message was automatically generated.



[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-09 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14951322#comment-14951322
 ] 

Zhe Zhang commented on HDFS-9205:
-

Thanks Nicholas.

bq. Those blocks have zero replicas so that it is impossible to replicate them. 
(Let's ignore read-only storage here since it is an incomplete feature.)
Right, those blocks only have corrupt replicas. Before transferring a block 
replica, the DN validates it using almost the same conditions as the NN's 
corrupt-replica logic, with the following exception:
{code}
// DataNode#transferBlock
} catch (EOFException e) {
  lengthTooShort = true;
{code}
Basically, the DN skips a replica only if it is too short, while the NN 
considers a replica corrupt when its size differs (larger or smaller) from the 
length the NN has recorded.

The above is a very rare corner case, and I agree this is a good change to cut 
unnecessary NN=>DN traffic for tasks that will be filtered out later anyway.
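The difference between the two checks can be sketched as a pair of predicates. The method names below are hypothetical illustrations, not the actual DataNode or NameNode code; the real logic lives in DataNode#transferBlock and the NN's corrupt-replica handling.

```java
// Sketch of the two replica-length checks described above, under the
// assumption that both compare a replica's on-disk length to the expected
// block length.
class ReplicaCheckSketch {
  // DN skips a replica only when it is shorter than expected
  // (the EOFException / lengthTooShort case).
  static boolean dnWouldTransfer(long replicaLen, long expectedLen) {
    return replicaLen >= expectedLen;
  }

  // NN flags any length mismatch, larger or smaller, as corrupt.
  static boolean nnConsidersCorrupt(long replicaLen, long expectedLen) {
    return replicaLen != expectedLen;
  }

  public static void main(String[] args) {
    long expected = 100;
    // A longer replica: the DN would still transfer it, while the NN
    // already considers it corrupt -- the rare corner case noted above.
    System.out.println(dnWouldTransfer(110, expected));     // true
    System.out.println(nnConsidersCorrupt(110, expected));  // true
  }
}
```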



[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-08 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14949752#comment-14949752
 ] 

Tsz Wo Nicholas Sze commented on HDFS-9205:
---

Thanks Zhe for the comments.

> ... those blocks won't be re-replicated, even though 
> chooseUnderReplicatedBlocks returns them? Or they are re-replicated in the 
> current logic, but they should not be (IIUC that's the case)?

Those blocks have zero live replicas, so it is impossible to replicate them. 
(Let's ignore read-only storage here since it is an incomplete feature.)

> ... But is there a use case for an admin to list corrupt blocks and reason 
> about them by accessing the local blk_ (and metadata) files? ...

This patch does not prevent that.

> If we do want to save the replication work for corrupt blocks, should we get 
> rid of QUEUE_WITH_CORRUPT_BLOCKS altogether?

The block priority could possibly be updated.



[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947718#comment-14947718
 ] 

Tsz Wo Nicholas Sze commented on HDFS-9205:
---

The TestReadOnlySharedStorage failure actually is related -- the current 
implementation of read-only storage breaks the corrupt-block definition by 
treating blocks that have read-only replicas but no normal replicas as corrupt.



[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-07 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14947759#comment-14947759
 ] 

Zhe Zhang commented on HDFS-9205:
-

Thanks Nicholas for the work. A few comments:
# bq. As a consequence, they cannot be replicated
Just to clarify, do you mean that even without the patch, those blocks won't be 
re-replicated, even though {{chooseUnderReplicatedBlocks}} returns them? Or 
they are re-replicated in the current logic, but they should not be (IIUC 
that's the case)?
# I agree that corrupt blocks are unreadable by the HDFS client. But is there a 
use case for an admin to list corrupt blocks and reason about them by accessing 
the local {{blk_}} (and metadata) files? For example, there's a chance (although 
very rare) that the replica is intact and only the metadata file is corrupt.
# If we do want to save the replication work for corrupt blocks, should we get 
rid of {{QUEUE_WITH_CORRUPT_BLOCKS}} altogether?

Nit:
# This line of comment should be updated:
{code}
// and 5 blocks from QUEUE_WITH_CORRUPT_BLOCKS.
{code}
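The behavior under discussion can be sketched as priority-ordered selection that never draws from the corrupt-block queue. The queue count, names, and structure below are simplified, hypothetical stand-ins, not the actual UnderReplicatedBlocks implementation.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Sketch: iterate the priority queues in order, but stop before the
// corrupt-block queue, so corrupt blocks are never scheduled for replication.
class PriorityQueuesSketch {
  static final int QUEUE_WITH_CORRUPT_BLOCKS = 4;  // lowest priority, simplified
  final List<Queue<String>> queues = new ArrayList<>();

  PriorityQueuesSketch() {
    for (int i = 0; i <= QUEUE_WITH_CORRUPT_BLOCKS; i++) {
      queues.add(new ArrayDeque<>());
    }
  }

  // Choose up to n blocks for replication, skipping the corrupt queue.
  List<String> chooseForReplication(int n) {
    List<String> chosen = new ArrayList<>();
    for (int p = 0; p < QUEUE_WITH_CORRUPT_BLOCKS && chosen.size() < n; p++) {
      Queue<String> q = queues.get(p);
      while (!q.isEmpty() && chosen.size() < n) {
        chosen.add(q.poll());
      }
    }
    return chosen;
  }

  public static void main(String[] args) {
    PriorityQueuesSketch s = new PriorityQueuesSketch();
    s.queues.get(0).add("blk_high");
    s.queues.get(QUEUE_WITH_CORRUPT_BLOCKS).add("blk_corrupt");
    System.out.println(s.chooseForReplication(10));  // [blk_high]
  }
}
```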



[jira] [Commented] (HDFS-9205) Do not schedule corrupt blocks for replication

2015-10-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14948021#comment-14948021
 ] 

Hadoop QA commented on HDFS-9205:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  26m 26s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |  13m 11s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  14m 50s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 28s | The applied patch generated 
1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 19s | The applied patch generated  8 
new checkstyle issues (total was 203, now 207). |
| {color:red}-1{color} | whitespace |   0m  1s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   2m 13s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 54s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 38s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | native |   4m 57s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 127m 24s | Tests failed in hadoop-hdfs. |
| | | 196m 27s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
| Timed out tests | org.apache.hadoop.hdfs.server.namenode.TestINodeFile |
|   | org.apache.hadoop.hdfs.server.datanode.TestTriggerBlockReport |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12765475/h9205_20151008.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / fde729f |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12845/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HDFS-Build/12845/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12845/artifact/patchprocess/whitespace.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12845/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12845/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/12845/console |


This message was automatically generated.
