[ https://issues.apache.org/jira/browse/HDFS-8093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14696757#comment-14696757 ]
Felix Borchers commented on HDFS-8093:
--------------------------------------

I have a very similar problem while running the balancer. {{hdfs fsck /}} returned HEALTHY, and the block that caused the balancer to throw an exception is no longer in HDFS:

{{hdfs fsck / -files -blocks | grep blk_1074256920_516292}} returned nothing.

Digging through the DataNode logs shows that the block was deleted on that node (see the excerpt below). Digging through the NameNode logs shows messages like "block .... does not belong to any file" (see the excerpt below). It seems there is a problem with removed/deleted blocks.

DataNode Logs
=============
Only lines containing blk_1074256920_516292 are shown:
{code}
2015-08-14 00:30:03,893 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-322804774-10.13.54.1-1412684451669:blk_1074256920_516292 src: /10.13.53.16:37605 dest: /10.13.53.19:50010
2015-08-14 00:30:07,841 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1074256920_516292 file /data/is24/hadoop/1/dfs/dataNode/current/BP-322804774-10.13.54.1-1412684451669/current/rbw/blk_1074256920 for deletion
2015-08-14 00:30:09,092 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-322804774-10.13.54.1-1412684451669 blk_1074256920_516292 file /data/is24/hadoop/1/dfs/dataNode/current/BP-322804774-10.13.54.1-1412684451669/current/rbw/blk_1074256920
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-322804774-10.13.54.1-1412684451669:blk_1074256920_516292
2015-08-14 00:46:44,916 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-322804774-10.13.54.1-1412684451669:blk_1074256920_516292, type=LAST_IN_PIPELINE, downstreams=0:[]
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-322804774-10.13.54.1-1412684451669:blk_1074256920_516292
2015-08-14 00:46:44,916 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-322804774-10.13.54.1-1412684451669:blk_1074256920_516292, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
{code}

NameNode Logs
=============
Only lines containing blk_1074256920_516292 are shown:
{code}
2015-08-14 00:30:03,843 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /system/balancer.id. BP-322804774-10.13.54.1-1412684451669 blk_1074256920_516292{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-4db312aa-bc23-47dc-b768-52a2d72b09d3:NORMAL:10.13.53.30:50010|RBW], ReplicaUnderConstruction[[DISK]DS-c7db1b58-8e25-435f-8af8-08b6754c021c:NORMAL:10.13.53.16:50010|RBW], ReplicaUnderConstruction[[DISK]DS-4457ae11-7684-4187-b4ad-56466d79fba2:NORMAL:10.13.53.19:50010|RBW]]}
2015-08-14 00:30:04,000 INFO BlockStateChange: BLOCK* addBlock: block blk_1074256920_516292 on node 10.13.53.16:50010 size 134217728 does not belong to any file
2015-08-14 00:30:04,000 INFO BlockStateChange: BLOCK* InvalidateBlocks: add blk_1074256920_516292 to 10.13.53.16:50010
2015-08-14 00:30:04,000 INFO BlockStateChange: BLOCK* BlockManager: ask 10.13.53.16:50010 to delete [blk_1074256920_516292]
2015-08-14 00:30:04,840 INFO BlockStateChange: BLOCK* addBlock: block blk_1074256920_516292 on node 10.13.53.19:50010 size 134217728 does not belong to any file
2015-08-14 00:30:04,840 INFO BlockStateChange: BLOCK* InvalidateBlocks: add blk_1074256920_516292 to 10.13.53.19:50010
2015-08-14 00:30:05,925 INFO BlockStateChange: BLOCK* addBlock: block blk_1074256920_516292 on node 10.13.53.30:50010 size 134217728 does not belong to any file
2015-08-14 00:30:05,925 INFO BlockStateChange: BLOCK* InvalidateBlocks: add blk_1074256920_516292 to 10.13.53.30:50010
2015-08-14 00:30:07,000 INFO BlockStateChange: BLOCK* BlockManager: ask 10.13.53.19:50010 to delete [blk_1074208004_467362,
blk_1074224392_483753, blk_1074093070_352362, blk_1074240530_499900, blk_1074256920_516292, blk_1074224154_483515, blk_1074240554_499924, blk_1074240556_499926, blk_1074240561_499931, blk_1074224178_483539, blk_1074240563_499933, blk_1074207795_467153, blk_1074093108_352429, blk_1074207797_467155, blk_1073798197_57374, blk_1074224182_483543, blk_1074240569_499939, blk_1074207802_467160, blk_1074224187_483548, blk_1074224188_483549, blk_1074207805_467163, blk_1074158653_418001, blk_1074207806_467164, blk_1074224191_483552, blk_1074207809_467167, blk_1074207817_467175, blk_1074207818_467176, blk_1074207820_467178, blk_1074207822_467180, blk_1074207830_467188, blk_1074224216_483577, blk_1074224217_483578, blk_1073798237_57414, blk_1073929310_188502, blk_1074207843_467201, blk_1073847400_106577, blk_1074207852_467210, blk_1074207856_467214, blk_1074207868_467226, blk_1074207870_467228, blk_1074207871_467229, blk_1073912966_172154, blk_1073781895_41072, blk_1074207879_467237, blk_1074093193_352516, blk_1074207881_467239, blk_1074207884_467242, blk_1074142350_401677, blk_1074207887_467245, blk_1074207891_467249, blk_1074224277_483638, blk_1074224278_483639, blk_1074207895_467253, blk_1074207897_467255, blk_1074240668_500038, blk_1074240672_500042, blk_1074207907_467265, blk_1074207908_467266, blk_1074207912_467270, blk_1074191530_450887, blk_1074191533_450890, blk_1074207918_467276, blk_1074191534_450891, blk_1074224303_483664, blk_1074224304_483665, blk_1074224306_483667, blk_1074207927_467285, blk_1074191547_450904, blk_1074224316_483677, blk_1074011325_270529, blk_1074191550_450907, blk_1074240706_500076, blk_1073863875_123053, blk_1073863884_123062, blk_1074093266_352589, blk_1074093267_352590, blk_1074093268_352591, blk_1074093269_352592, blk_1074093270_352593, blk_1074175191_434548, blk_1074093271_352594, blk_1074093272_352595, blk_1074175192_434549, blk_1074093273_352596, blk_1074175193_434550, blk_1074093274_352597, blk_1074093275_352598, blk_1074175195_434552, 
blk_1074093276_352599, blk_1074207965_467323, blk_1074175198_434555, blk_1074175199_434556, blk_1073798369_57546, blk_1074191589_450946, blk_1074191590_450947, blk_1074224365_483726, blk_1074224366_483727, blk_1074207982_467340, blk_1074224379_483740]
2015-08-14 00:30:07,000 INFO BlockStateChange: BLOCK* BlockManager: ask 10.13.53.30:50010 to delete [blk_1074256368_515740, blk_1074256514_515886, blk_1074256371_515743, blk_1074256372_515744, blk_1074256517_515889, blk_1074256920_516292, blk_1074256382_515754]
2015-08-14 00:46:44,953 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.IOException: BP-322804774-10.13.54.1-1412684451669:blk_1074256920_516292 does not exist or is not under Constructionnull
java.io.IOException: BP-322804774-10.13.54.1-1412684451669:blk_1074256920_516292 does not exist or is not under Constructionnull
{code}

Balancer Logs
=============
{code}
2015-08-14 00:30:02,889 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: Using a threshold of 5.0
...
2015-08-14 00:46:44,903 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /racks/rack/10.13.53.11:50010
2015-08-14 00:46:44,903 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /racks/rack/10.13.53.13:50010
2015-08-14 00:46:44,903 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /racks/rack/10.13.53.14:50010
2015-08-14 00:46:44,903 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /racks/rack/10.13.53.30:50010
2015-08-14 00:46:44,903 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /racks/rack/10.13.53.15:50010
2015-08-14 00:46:44,903 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /racks/rack/10.13.53.17:50010
2015-08-14 00:46:44,903 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /racks/rack/10.13.53.12:50010
2015-08-14 00:46:44,903 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /racks/rack/10.13.53.18:50010
2015-08-14 00:46:44,903 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /racks/rack/10.13.53.16:50010
2015-08-14 00:46:44,903 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /racks/rack/10.13.53.19:50010
2015-08-14 00:46:44,903 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 1 over-utilized: [10.13.53.30:50010:DISK]
2015-08-14 00:46:44,903 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 0 underutilized: []
2015-08-14 00:46:44,903 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: Need to move 3.21 GB to make the cluster balanced.
2015-08-14 00:46:44,904 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: Decided to move 1.17 GB bytes from 10.13.53.30:50010:DISK to 10.13.53.11:50010:DISK
2015-08-14 00:46:44,904 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: Will move 1.17 GB in this iteration
2015-08-14 00:46:44,919 WARN org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-322804774-10.13.54.1-1412684451669:blk_1074256920_516292
java.io.IOException: Bad response ERROR for block BP-322804774-10.13.54.1-1412684451669:blk_1074256920_516292 from datanode DatanodeInfoWithStorage[10.13.53.19:50010,DS-4457ae11-7684-4187-b4ad-56466d79fba2,DISK]
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:909)
2015-08-14 00:46:44,922 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block BP-322804774-10.13.54.1-1412684451669:blk_1074256920_516292 in pipeline DatanodeInfoWithStorage[10.13.53.30:50010,DS-4db312aa-bc23-47dc-b768-52a2d72b09d3,DISK], DatanodeInfoWithStorage[10.13.53.16:50010,DS-c7db1b58-8e25-435f-8af8-08b6754c021c,DISK], DatanodeInfoWithStorage[10.13.53.19:50010,DS-4457ae11-7684-4187-b4ad-56466d79fba2,DISK]: bad datanode DatanodeInfoWithStorage[10.13.53.19:50010,DS-4457ae11-7684-4187-b4ad-56466d79fba2,DISK]
2015-08-14 00:46:44,958 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): BP-322804774-10.13.54.1-1412684451669:blk_1074256920_516292 does not exist or is not under Constructionnull
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:7027)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:7094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:748)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.updateBlockForPipeline(AuthorizationProviderProxyClientProtocol.java:637)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:932)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy11.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:877)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy12.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1278)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1016)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:560)
2015-08-14 00:46:44,978 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close inode 709042
org.apache.hadoop.ipc.RemoteException(java.io.IOException): BP-322804774-10.13.54.1-1412684451669:blk_1074256920_516292 does not exist or is not under Constructionnull
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:7027)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:7094)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:748)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.updateBlockForPipeline(AuthorizationProviderProxyClientProtocol.java:637)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:932)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy11.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:877)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy12.updateBlockForPipeline(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1278)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1016)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:560)
{code}

> BP does not exist or is not under Constructionnull
> --------------------------------------------------
>
>                 Key: HDFS-8093
>                 URL: https://issues.apache.org/jira/browse/HDFS-8093
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: balancer & mover
>    Affects Versions: 2.6.0
>         Environment: Centos 6.5
>            Reporter: LINTE
>
> An HDFS balancer run that had been balancing blocks between datanodes for several hours ended by failing with the following error.
> The getStoredBlock function returned a null BlockInfo.
> java.io.IOException: Bad response ERROR for block BP-970443206-192.168.0.208-1397583979378:blk_1086729930_13046030 from datanode 192.168.0.18:1004
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:897)
> 15/04/08 05:52:51 WARN hdfs.DFSClient: Error Recovery for block BP-970443206-192.168.0.208-1397583979378:blk_1086729930_13046030 in pipeline 192.168.0.63:1004, 192.168.0.1:1004, 192.168.0.18:1004: bad datanode 192.168.0.18:1004
> 15/04/08 05:52:51 WARN hdfs.DFSClient: DataStreamer Exception
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): BP-970443206-192.168.0.208-1397583979378:blk_1086729930_13046030 does not exist or is not under Constructionnull
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:6913)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:6980)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:717)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:931)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>         at com.sun.proxy.$Proxy11.updateBlockForPipeline(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:877)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy12.updateBlockForPipeline(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1266)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1004)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:548)
> 15/04/08 05:52:51 ERROR hdfs.DFSClient: Failed to close inode 19801755
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): BP-970443206-192.168.0.208-1397583979378:blk_1086729930_13046030 does not exist or is not under Constructionnull
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:6913)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:6980)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:717)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:931)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>         at com.sun.proxy.$Proxy11.updateBlockForPipeline(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolTranslatorPB.java:877)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy12.updateBlockForPipeline(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1266)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1004)
>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:548)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
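The diagnostic procedure from the comment above (confirm via fsck that the block maps to no file, then search the DataNode and NameNode logs for its lifecycle) can be sketched as a small script. This is only a sketch: it assumes an {{hdfs}} client on the PATH, and the log paths are placeholders that must be adjusted for the cluster's actual layout.

```shell
#!/bin/sh
# Block ID taken from the report above; substitute the one from your own failure.
BLOCK="blk_1074256920_516292"

# Hypothetical log locations -- adjust for your distribution's layout.
DN_LOG="/var/log/hadoop-hdfs/hadoop-hdfs-datanode-*.log"
NN_LOG="/var/log/hadoop-hdfs/hadoop-hdfs-namenode-*.log"

# 1. Does any file still reference the block? Empty output means no file does.
hdfs fsck / -files -blocks | grep "$BLOCK"

# 2. What did the DataNode do with the block (receive, schedule for deletion, delete)?
grep "$BLOCK" $DN_LOG

# 3. Did the NameNode decide the block belongs to no file and invalidate it?
grep "$BLOCK" $NN_LOG | grep "does not belong to any file"
```

If step 1 prints nothing while step 3 finds matches, the cluster is in the state described here: the balancer's in-flight block was invalidated and deleted underneath it.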