[ https://issues.apache.org/jira/browse/HDFS-9083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15037293#comment-15037293 ]

Xiaoyu Yao commented on HDFS-9083:
----------------------------------

The 2.7 patch causes TestBalancer#testBalancerWithPinnedBlocks to fail; the 
test passes without this patch.
[~shahrs87], can you take a look?

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.server.balancer.TestBalancer
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 12.888 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.balancer.TestBalancer
testBalancerWithPinnedBlocks(org.apache.hadoop.hdfs.server.balancer.TestBalancer)  Time elapsed: 12.748 sec  <<< FAILURE!
java.lang.AssertionError: expected:<-3> but was:<0>
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.failNotEquals(Assert.java:743)
        at org.junit.Assert.assertEquals(Assert.java:118)
        at org.junit.Assert.assertEquals(Assert.java:555)
        at org.junit.Assert.assertEquals(Assert.java:542)
        at org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerWithPinnedBlocks(TestBalancer.java:362)


Results :

Failed tests: 
  TestBalancer.testBalancerWithPinnedBlocks:362 expected:<-3> but was:<0>
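
For reference, a hedged sketch of what the assertion at TestBalancer.java:362 presumably checks (a paraphrase, not the exact branch-2.7 test source; the MiniDFSCluster setup is elided and the method name is made up): the test pins every replica to its datanode and expects the balancer to exit with NO_MOVE_PROGRESS (-3), while the observed 0 is SUCCESS, i.e. the balancer moved replicas that should have stayed pinned.

{noformat}
import java.net.URI;
import java.util.Collection;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSUtil;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.server.balancer.Balancer;
import org.apache.hadoop.hdfs.server.balancer.ExitStatus;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

@Test(timeout = 100000)
public void sketchOfPinnedBlocksCheck() throws Exception {
  // Sketch only: enable block pinning so replicas written with favored
  // nodes cannot be moved by the balancer.
  Configuration conf = new HdfsConfiguration();
  conf.setBoolean("dfs.datanode.block-pinning.enabled", true);

  // ... start a MiniDFSCluster, write files with favored nodes so every
  // replica is pinned, then add an empty datanode to trigger balancing ...

  Collection<URI> namenodes = DFSUtil.getNsServiceRpcUris(conf);
  int r = Balancer.run(namenodes, Balancer.Parameters.DEFAULT, conf);

  // expected:<-3> corresponds to ExitStatus.NO_MOVE_PROGRESS; the observed
  // 0 is ExitStatus.SUCCESS, meaning pinned replicas were moved.
  assertEquals(ExitStatus.NO_MOVE_PROGRESS.getExitCode(), r);
}
{noformat}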



> Replication violates block placement policy.
> --------------------------------------------
>
>                 Key: HDFS-9083
>                 URL: https://issues.apache.org/jira/browse/HDFS-9083
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.6.0
>            Reporter: Rushabh S Shah
>            Assignee: Rushabh S Shah
>            Priority: Blocker
>             Fix For: 2.7.2, 2.6.3
>
>         Attachments: HDFS-9083-branch-2.6.patch, HDFS-9083-branch-2.7.patch
>
>
> Recently we have been noticing many cases in which all replicas of a block 
> reside on the same rack.
> During block creation the block placement policy was honored, but after a 
> node failure occurring in a specific manner, the block ends up in such a 
> state.
> On investigating further, I found that BlockManager#blockHasEnoughRacks 
> depends on the config net.topology.script.file.name:
> {noformat}
>     if (!this.shouldCheckForEnoughRacks) {
>       return true;
>     }
> {noformat}
> We specify a custom DNSToSwitchMapping implementation via 
> net.topology.node.switch.mapping.impl and no longer use the 
> net.topology.script.file.name config.
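
For context on the interaction described above, a hedged sketch (the field assignment is a paraphrase, not the exact BlockManager source, and CustomRackMapping is a hypothetical stand-in for a custom mapping class): the rack check guarded by shouldCheckForEnoughRacks is presumably armed only from the script-based key, so a deployment that configures topology via net.topology.node.switch.mapping.impl never gets the check.

{noformat}
// Paraphrase of the dependency: the rack check is enabled only when the
// script-based topology key is set (exact BlockManager code may differ).
this.shouldCheckForEnoughRacks =
    conf.get("net.topology.script.file.name") != null;

// A deployment like the one described configures a custom mapping instead,
// so the key above stays unset and blockHasEnoughRacks() returns true
// unconditionally. CustomRackMapping is a hypothetical implementation of
// org.apache.hadoop.net.DNSToSwitchMapping.
conf.setClass("net.topology.node.switch.mapping.impl",
    CustomRackMapping.class, DNSToSwitchMapping.class);
{noformat}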



