[ https://issues.apache.org/jira/browse/HDFS-10715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kihwal Lee updated HDFS-10715:
------------------------------
    Assignee: Guangbin Zhu

> NPE when applying AvailableSpaceBlockPlacementPolicy
> ----------------------------------------------------
>
>                 Key: HDFS-10715
>                 URL: https://issues.apache.org/jira/browse/HDFS-10715
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.8.0
>         Environment: cdh5.8.0
>            Reporter: Guangbin Zhu
>            Assignee: Guangbin Zhu
>         Attachments: HDFS-10715.001.patch, HDFS-10715.002.patch, HDFS-10715.003.patch, HDFS-10715.004.patch
>
>
> HDFS-8131 introduced AvailableSpaceBlockPlacementPolicy, but in some cases it causes an NPE.
> Here are my namenode daemon logs:
> 2016-08-02 13:05:03,271 WARN org.apache.hadoop.ipc.Server: IPC Server handler 13 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.132.89.79:14001 Call#56 Retry#0
> java.lang.NullPointerException
>         at org.apache.hadoop.hdfs.server.blockmanagement.AvailableSpaceBlockPlacementPolicy.compareDataNode(AvailableSpaceBlockPlacementPolicy.java:95)
>         at org.apache.hadoop.hdfs.server.blockmanagement.AvailableSpaceBlockPlacementPolicy.chooseDataNode(AvailableSpaceBlockPlacementPolicy.java:80)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:691)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:665)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:572)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:457)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:367)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:242)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:114)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:130)
>         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1606)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3315)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:679)
>         at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:214)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:489)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
> I reviewed the source code and found the bug in the chooseDataNode method:
> clusterMap.chooseRandom may return null, and the subsequent a.equals(b)
> comparison in compareDataNode then throws a NullPointerException.
> Although the exception is caught and the client retries the call, I think
> this bug should be fixed.
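
For illustration only (this is not the attached patch), here is a minimal standalone Java sketch of the failure mode described above and of the kind of null guard the comparison needs. The Node class is a hypothetical stand-in for DatanodeDescriptor, and chooseRandom here only mimics the fact that clusterMap.chooseRandom may legitimately return null.

    import java.util.Random;

    public class NullSafeCompareSketch {

        // Hypothetical stand-in for DatanodeDescriptor.
        static final class Node {
            final String name;
            final long remaining;   // stands in for the datanode's remaining space
            Node(String name, long remaining) { this.name = name; this.remaining = remaining; }
        }

        // Buggy shape: a.equals(b) throws when a is null (the reported NPE).
        static int compareBuggy(Node a, Node b) {
            if (a.equals(b)) {
                return 0;
            }
            return Long.compare(b.remaining, a.remaining);
        }

        // Null-safe shape: short-circuit before dereferencing either node.
        static int compareNullSafe(Node a, Node b) {
            if (a == null || b == null || a.equals(b)) {
                return 0;            // treat a missing candidate as "no preference"
            }
            return Long.compare(b.remaining, a.remaining);
        }

        // Mimics clusterMap.chooseRandom(...): may return null when no node
        // in the requested scope is available.
        static Node chooseRandom(Random rnd) {
            return rnd.nextBoolean()
                ? new Node("dn-" + rnd.nextInt(10), rnd.nextInt(1_000_000))
                : null;
        }

        public static void main(String[] args) {
            Random rnd = new Random(42);
            Node a = chooseRandom(rnd);
            Node b = chooseRandom(rnd);
            // compareBuggy(a, b) could throw here; the null-safe variant never does.
            System.out.println("null-safe comparison result: " + compareNullSafe(a, b));
        }
    }

The real fix belongs in AvailableSpaceBlockPlacementPolicy (see the attached patches); the sketch only shows why guarding the null returned by chooseRandom before calling equals removes the NPE.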


