On Wed, Jun 4, 2014 at 7:08 AM, Ian Brooks <i.bro...@sensewhere.com> wrote:
> Hi
>
> Well I performed the procedure on another 4 nodes today and no node fell
> over from it, so perhaps I was just very unlucky with the 2 previous
> attempts.

There is an issue in DFSClient. Need to take a look...

> When I shut down the datanodes I saw errors in the logs, but they are to
> be expected and the servers just continued as normal after that. The only
> slight oddity was that the datanode in question for the log below was
> 10.143.38.105, so I'm not sure why it was complaining about 10.143.38.112
> and 10.143.38.116, as they were both running and healthy.

Anything in the logs of 10.143.38.112 around this time? The ERROR message is
composed by the catch in DataXceiver#run. It has 10.143.38.116 as the
localAddress and .112 as the remote (it closed the connection, hence the
'Connection reset by peer'?)
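Roughly, the tail of DataXceiver#run that composes it looks like the below
(a from-memory sketch, not the exact 2.3 source -- the op name, src, and
dest in the ERROR line all come out of this catch):

    // Sketch of org.apache.hadoop.hdfs.server.datanode.DataXceiver#run
    // (approximate; DataXceiver.java:221 in the trace below is the real thing).
    public void run() {
      Op op = null;
      try {
        op = readOp();
        processOp(op);  // opWriteBlock() and friends execute under here
      } catch (Throwable t) {
        // Any failure while processing the op -- including the peer
        // resetting the connection -- lands here and is logged with
        // both ends of the socket:
        LOG.error(datanode.getDisplayName() + ":DataXceiver error processing "
            + ((op == null) ? "unknown" : op.name()) + " operation "
            + " src: " + remoteAddress + " dest: " + localAddress, t);
      }
    }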
St.Ack

> 2014-06-04 13:57:07,641 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-2121456822-10.143.38.149-1396953188241:blk_1074085660_344913, type=HAS_DOWNSTREAM_IN_PIPELINE
> java.io.EOFException: Premature EOF: no length prefix available
>     at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1883)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:116)
>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:958)
>     at java.lang.Thread.run(Thread.java:744)
> 2014-06-04 13:57:07,646 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-2121456822-10.143.38.149-1396953188241:blk_1074085660_344913
> java.io.IOException: Connection reset by peer
>     at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>     at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>     at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
>     at sun.nio.ch.IOUtil.read(IOUtil.java:197)
>     at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
>     at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
>     at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
>     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
>     at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>     at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
>     at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>     at java.io.DataInputStream.read(DataInputStream.java:149)
>     at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:442)
>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:701)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:572)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:115)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>     at java.lang.Thread.run(Thread.java:744)
> 2014-06-04 13:57:07,647 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in BlockReceiver.run():
> java.nio.channels.ClosedByInterruptException
>     at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>     at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:496)
>     at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
>     at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>     at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
>     at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
>     at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>     at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
>     at java.io.DataOutputStream.flush(DataOutputStream.java:123)
>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1087)
>     at java.lang.Thread.run(Thread.java:744)
> 2014-06-04 13:57:07,648 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: checkDiskError: exception:
> java.nio.channels.ClosedByInterruptException
>     at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>     at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:496)
>     at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
>     at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>     at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
>     at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
>     at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>     at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
>     at java.io.DataOutputStream.flush(DataOutputStream.java:123)
>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1087)
>     at java.lang.Thread.run(Thread.java:744)
> 2014-06-04 13:57:07,648 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Not checking disk as checkDiskError was called on a network related exception
> 2014-06-04 13:57:07,648 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-2121456822-10.143.38.149-1396953188241:blk_1074085660_344913, type=HAS_DOWNSTREAM_IN_PIPELINE
> java.nio.channels.ClosedByInterruptException
>     at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>     at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:496)
>     at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
>     at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>     at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
>     at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
>     at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>     at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
>     at java.io.DataOutputStream.flush(DataOutputStream.java:123)
>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1087)
>     at java.lang.Thread.run(Thread.java:744)
> 2014-06-04 13:57:07,648 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-2121456822-10.143.38.149-1396953188241:blk_1074085660_344913, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2014-06-04 13:57:07,648 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-2121456822-10.143.38.149-1396953188241:blk_1074085660_344913 received exception java.io.IOException: Connection reset by peer
> 2014-06-04 13:57:07,649 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: sw-hadoop-007:50010:DataXceiver error processing WRITE_BLOCK operation src: /10.143.38.112:24190 dest: /10.143.38.116:50010
> java.io.IOException: Connection reset by peer
>     at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>     at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>     at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
>     at sun.nio.ch.IOUtil.read(IOUtil.java:197)
>     at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
>     at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
>     at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
>     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
>     at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>     at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
>     at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>     at java.io.DataInputStream.read(DataInputStream.java:149)
>     at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:442)
>     at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:701)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:572)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:115)
>     at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
>     at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
>     at java.lang.Thread.run(Thread.java:744)
>
> -Ian Brooks
>
> On Tuesday 03 Jun 2014 15:59:11 Stack wrote:
> > On Tue, Jun 3, 2014 at 9:18 AM, Ian Brooks <i.bro...@sensewhere.com> wrote:
> >
> > > Hi,
> > >
> > > Well checking the hadoop logs shows the datanode restarting at that
> > > time. Looks like a rogue puppet config decided to restart the datanode.
> > >
> > > That said, should the regionserver not account for this and request
> > > the data from another datanode?
> >
> > Yes. Should. Something is odd in DFSClient when we get:
> >
> > org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): 0
> >
> > when it seems like there are plenty of replicas still, according to:
> >
> > DFSClient: Error Recovery for block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932 in pipeline 10.143.38.117:50010, 10.143.38.116:50010, 10.143.38.100:50010: bad datanode 10.143.38.117:50010
> > 2014-06-03 13:05:03,915 WARN [DataStreamer for file /user/hbase/WALs/############,16020,1401716790638/############%2C16020%2C1401716790638.1401796562200 block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932] hdfs.DFSClient: DataStreamer Exception
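> >
> > For reference, the indexing that throws is on the namenode side, in
> > DatanodeManager#getDatanodeStorageInfos. Roughly (a sketch from memory,
> > not the exact 2.3.0 source):
> >
> >     // The client hands getAdditionalDatanode two parallel arrays: the
> >     // datanodes still in its pipeline, and the storage IDs it knows
> >     // for them.
> >     DatanodeStorageInfo[] getDatanodeStorageInfos(
> >         DatanodeID[] datanodeID, String[] storageIDs) {
> >       final DatanodeStorageInfo[] storages =
> >           new DatanodeStorageInfo[datanodeID.length];
> >       for (int i = 0; i < datanodeID.length; i++) {
> >         final DatanodeDescriptor dd = getDatanode(datanodeID[i]);
> >         // If the datanode array is non-empty but the storage-ID array
> >         // came across empty, storageIDs[0] throws the bare
> >         // ArrayIndexOutOfBoundsException: 0 seen above.
> >         storages[i] = dd.getStorageInfo(storageIDs[i]);
> >       }
> >       return storages;
> >     }
> >
> > If that reading is right, the namenode is just the messenger; the
> > question is why the client's getAdditionalDatanode request carried no
> > storage IDs for the surviving replicas.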
> >
> > You can make it happen easily? It happened to you twice?
> >
> > St.Ack
> >
> > -Ian Brooks
> >
> > On Tuesday 03 Jun 2014 08:35:05 Stack wrote:
> > > Anything in the namenode logs, Ian? It's like we've run out of
> > > replicas. We see this:
> > >
> > > 2014-06-03 13:05:03,898 WARN [DataStreamer for file /user/hbase/WALs/############,16020,1401716790638/############%2C16020%2C1401716790638.1401796562200 block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932] hdfs.DFSClient: Error Recovery for block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932 in pipeline 10.143.38.117:50010, 10.143.38.116:50010, 10.143.38.100:50010: bad datanode 10.143.38.117:50010
> > >
> > > I presume 10.143.38.117:50010 is the local datanode? We should just be
> > > moving on to 10.143.38.116:50010, but array index out of bounds with a
> > > size of 0.
> > >
> > > When is this happening? When you close the RS after all regions have
> > > been moved off? Anything earlier in the log?
> > >
> > > St.Ack
> > >
> > > On Tue, Jun 3, 2014 at 6:12 AM, Ian Brooks <i.bro...@sensewhere.com> wrote:
> > >
> > > > Hi,
> > > >
> > > > For my testing I'm only taking one server out (simulating the process
> > > > for patching etc.). The hadoop datanode process was left running at
> > > > this point.
> > > >
> > > > -Ian Brooks
> > > >
> > > > On Tuesday 03 Jun 2014 06:06:33 Ted Yu wrote:
> > > > > Please see http://hbase.apache.org/book/node.management.html
> > > > > Especially 15.3.1.1
> > > > >
> > > > > Did you stop the datanode on that server or let the datanode run?
> > > > >
> > > > > Cheers
> > > > >
> > > > > On Jun 3, 2014, at 5:47 AM, Ian Brooks <i.bro...@sensewhere.com> wrote:
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I've been working on testing the procedures for taking a node out
> > > > > > of service in our test cluster and have come across the same
> > > > > > problem in both attempts now, whereby once a regionserver has
> > > > > > been shut down, 5-10 minutes later most of the other
> > > > > > regionservers crash with the below stack trace.
> > > > > >
> > > > > > The procedure I'm using for taking the first node out is
> > > > > > (commands sketched below):
> > > > > >
> > > > > > 1. Turn off the balancer.
> > > > > > 2. Use region_mover.rb to unload all regions from the target server.
> > > > > > 3. Once all regions have been moved, stop the region server.
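> > > > > >
> > > > > > Concretely, the three steps are roughly the following (a sketch;
> > > > > > paths are relative to the hbase install and the hostname is just
> > > > > > an example from this cluster):
> > > > > >
> > > > > >     # 1. Turn off the balancer from the hbase shell:
> > > > > >     echo 'balance_switch false' | bin/hbase shell
> > > > > >     # 2. Unload all regions from the target server:
> > > > > >     bin/hbase org.jruby.Main bin/region_mover.rb unload sw-hadoop-007
> > > > > >     # 3. Once the unload completes, stop the region server:
> > > > > >     bin/hbase-daemon.sh stop regionserver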
> > > > > >
> > > > > > I'm using hbase 0.96.1 on hadoop 2.3.0, any ideas why this is
> > > > > > happening and how to stop it from happening?
> > > > > >
> > > > > > 2014-06-03 13:05:03,897 WARN [ResponseProcessor for block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932] hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932
> > > > > > java.io.EOFException: Premature EOF: no length prefix available
> > > > > >     at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1492)
> > > > > >     at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:116)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:721)
> > > > > > 2014-06-03 13:05:03,898 WARN [DataStreamer for file /user/hbase/WALs/############,16020,1401716790638/############%2C16020%2C1401716790638.1401796562200 block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932] hdfs.DFSClient: Error Recovery for block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932 in pipeline 10.143.38.117:50010, 10.143.38.116:50010, 10.143.38.100:50010: bad datanode 10.143.38.117:50010
> > > > > > 2014-06-03 13:05:03,915 WARN [DataStreamer for file /user/hbase/WALs/############,16020,1401716790638/############%2C16020%2C1401716790638.1401796562200 block BP-2121456822-10.143.38.149-1396953188241:blk_1074073683_332932] hdfs.DFSClient: DataStreamer Exception
> > > > > > org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): 0
> > > > > >     at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
> > > > > >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
> > > > > >     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
> > > > > >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
> > > > > >     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> > > > > >     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> > > > > >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
> > > > > >     at java.security.AccessController.doPrivileged(Native Method)
> > > > > >     at javax.security.auth.Subject.doAs(Subject.java:415)
> > > > > >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
> > > > > >
> > > > > >     at org.apache.hadoop.ipc.Client.call(Client.java:1347)
> > > > > >     at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> > > > > >     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> > > > > >     at com.sun.proxy.$Proxy13.getAdditionalDatanode(Unknown Source)
> > > > > >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:352)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >     at java.lang.reflect.Method.invoke(Method.java:606)
> > > > > >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> > > > > >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> > > > > >     at com.sun.proxy.$Proxy14.getAdditionalDatanode(Unknown Source)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >     at java.lang.reflect.Method.invoke(Method.java:606)
> > > > > >     at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> > > > > >     at com.sun.proxy.$Proxy15.getAdditionalDatanode(Unknown Source)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
> > > > > > 2014-06-03 13:05:48,489 ERROR [RpcServer.handler=22,port=16020] wal.FSHLog: syncer encountered error, will retry. txid=211
> > > > > > org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): 0
> > > > > >     at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
> > > > > >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
> > > > > >     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
> > > > > >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
> > > > > >     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> > > > > >     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> > > > > >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
> > > > > >     at java.security.AccessController.doPrivileged(Native Method)
> > > > > >     at javax.security.auth.Subject.doAs(Subject.java:415)
> > > > > >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
> > > > > >
> > > > > >     at org.apache.hadoop.ipc.Client.call(Client.java:1347)
> > > > > >     at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> > > > > >     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> > > > > >     at com.sun.proxy.$Proxy13.getAdditionalDatanode(Unknown Source)
> > > > > >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:352)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >     at java.lang.reflect.Method.invoke(Method.java:606)
> > > > > >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> > > > > >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> > > > > >     at com.sun.proxy.$Proxy14.getAdditionalDatanode(Unknown Source)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >     at java.lang.reflect.Method.invoke(Method.java:606)
> > > > > >     at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> > > > > >     at com.sun.proxy.$Proxy15.getAdditionalDatanode(Unknown Source)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
> > > > > > 2014-06-03 13:05:48,489 FATAL [RpcServer.handler=22,port=16020] wal.FSHLog: Could not sync. Requesting roll of hlog
> > > > > > org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): 0
> > > > > >     at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
> > > > > >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
> > > > > >     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
> > > > > >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
> > > > > >     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> > > > > >     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> > > > > >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
> > > > > >     at java.security.AccessController.doPrivileged(Native Method)
> > > > > >     at javax.security.auth.Subject.doAs(Subject.java:415)
> > > > > >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
> > > > > >
> > > > > >     at org.apache.hadoop.ipc.Client.call(Client.java:1347)
> > > > > >     at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> > > > > >     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> > > > > >     at com.sun.proxy.$Proxy13.getAdditionalDatanode(Unknown Source)
> > > > > >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:352)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >     at java.lang.reflect.Method.invoke(Method.java:606)
> > > > > >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> > > > > >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> > > > > >     at com.sun.proxy.$Proxy14.getAdditionalDatanode(Unknown Source)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >     at java.lang.reflect.Method.invoke(Method.java:606)
> > > > > >     at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> > > > > >     at com.sun.proxy.$Proxy15.getAdditionalDatanode(Unknown Source)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
> > > > > > 2014-06-03 13:05:48,490 DEBUG [regionserver16020.logRoller] regionserver.LogRoller: HLog roll requested
> > > > > > 2014-06-03 13:05:48,490 DEBUG [RpcServer.handler=22,port=16020] regionserver.HRegion: rollbackMemstore rolled back 1 keyvalues from start:0 to end:1
> > > > > > 2014-06-03 13:05:48,609 DEBUG [regionserver16020.logRoller] wal.FSHLog: cleanupCurrentWriter waiting for transactions to get synced total 211 synced till here 210
> > > > > > 2014-06-03 13:05:48,609 FATAL [regionserver16020.logRoller] wal.FSHLog: Could not sync. Requesting roll of hlog
> > > > > > org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): 0
> > > > > >     at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
> > > > > >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
> > > > > >     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
> > > > > >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
> > > > > >     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> > > > > >     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> > > > > >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
> > > > > >     at java.security.AccessController.doPrivileged(Native Method)
> > > > > >     at javax.security.auth.Subject.doAs(Subject.java:415)
> > > > > >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
> > > > > >
> > > > > >     at org.apache.hadoop.ipc.Client.call(Client.java:1347)
> > > > > >     at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> > > > > >     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> > > > > >     at com.sun.proxy.$Proxy13.getAdditionalDatanode(Unknown Source)
> > > > > >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:352)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >     at java.lang.reflect.Method.invoke(Method.java:606)
> > > > > >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> > > > > >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> > > > > >     at com.sun.proxy.$Proxy14.getAdditionalDatanode(Unknown Source)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >     at java.lang.reflect.Method.invoke(Method.java:606)
> > > > > >     at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> > > > > >     at com.sun.proxy.$Proxy15.getAdditionalDatanode(Unknown Source)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
> > > > > > 2014-06-03 13:05:48,609 ERROR [regionserver16020.logRoller] wal.FSHLog: Failed close of HLog writer
> > > > > > org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): 0
> > > > > >     at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
> > > > > >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
> > > > > >     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
> > > > > >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
> > > > > >     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> > > > > >     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> > > > > >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
> > > > > >     at java.security.AccessController.doPrivileged(Native Method)
> > > > > >     at javax.security.auth.Subject.doAs(Subject.java:415)
> > > > > >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
> > > > > >
> > > > > >     at org.apache.hadoop.ipc.Client.call(Client.java:1347)
> > > > > >     at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> > > > > >     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> > > > > >     at com.sun.proxy.$Proxy13.getAdditionalDatanode(Unknown Source)
> > > > > >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:352)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >     at java.lang.reflect.Method.invoke(Method.java:606)
> > > > > >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> > > > > >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> > > > > >     at com.sun.proxy.$Proxy14.getAdditionalDatanode(Unknown Source)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >     at java.lang.reflect.Method.invoke(Method.java:606)
> > > > > >     at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> > > > > >     at com.sun.proxy.$Proxy15.getAdditionalDatanode(Unknown Source)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
> > > > > > 2014-06-03 13:05:48,610 FATAL [regionserver16020.logRoller] regionserver.HRegionServer: ABORTING region server ############,16020,1401716790638: Failed log close in log roller
> > > > > > org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException: #1401796562200
> > > > > >     at org.apache.hadoop.hbase.regionserver.wal.FSHLog.cleanupCurrentWriter(FSHLog.java:707)
> > > > > >     at org.apache.hadoop.hbase.regionserver.wal.FSHLog.rollWriter(FSHLog.java:538)
> > > > > >     at org.apache.hadoop.hbase.regionserver.LogRoller.run(LogRoller.java:96)
> > > > > >     at java.lang.Thread.run(Thread.java:744)
> > > > > > Caused by: org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): 0
> > > > > >     at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
> > > > > >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
> > > > > >     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
> > > > > >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
> > > > > >     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> > > > > >     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> > > > > >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
> > > > > >     at java.security.AccessController.doPrivileged(Native Method)
> > > > > >     at javax.security.auth.Subject.doAs(Subject.java:415)
> > > > > >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
> > > > > >
> > > > > >     at org.apache.hadoop.ipc.Client.call(Client.java:1347)
> > > > > >     at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> > > > > >     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> > > > > >     at com.sun.proxy.$Proxy13.getAdditionalDatanode(Unknown Source)
> > > > > >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:352)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >     at java.lang.reflect.Method.invoke(Method.java:606)
> > > > > >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> > > > > >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> > > > > >     at com.sun.proxy.$Proxy14.getAdditionalDatanode(Unknown Source)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >     at java.lang.reflect.Method.invoke(Method.java:606)
> > > > > >     at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> > > > > >     at com.sun.proxy.$Proxy15.getAdditionalDatanode(Unknown Source)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
> > > > > > 2014-06-03 13:05:48,610 FATAL [regionserver16020.logRoller] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.hadoop.hbase.security.access.AccessController, org.apache.hadoop.hbase.security.token.TokenProvider]
> > > > > > 2014-06-03 13:05:48,612 ERROR [RpcServer.handler=21,port=16020] wal.FSHLog: syncer encountered error, will retry. txid=212
> > > > > > org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): 0
> > > > > >     at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
> > > > > >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
> > > > > >     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
> > > > > >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
> > > > > >     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> > > > > >     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> > > > > >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
> > > > > >     at java.security.AccessController.doPrivileged(Native Method)
> > > > > >     at javax.security.auth.Subject.doAs(Subject.java:415)
> > > > > >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
> > > > > >
> > > > > >     at org.apache.hadoop.ipc.Client.call(Client.java:1347)
> > > > > >     at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> > > > > >     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> > > > > >     at com.sun.proxy.$Proxy13.getAdditionalDatanode(Unknown Source)
> > > > > >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:352)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >     at java.lang.reflect.Method.invoke(Method.java:606)
> > > > > >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> > > > > >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> > > > > >     at com.sun.proxy.$Proxy14.getAdditionalDatanode(Unknown Source)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >     at java.lang.reflect.Method.invoke(Method.java:606)
> > > > > >     at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> > > > > >     at com.sun.proxy.$Proxy15.getAdditionalDatanode(Unknown Source)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
> > > > > > 2014-06-03 13:05:48,612 FATAL [RpcServer.handler=21,port=16020] wal.FSHLog: Could not sync. Requesting roll of hlog
> > > > > > org.apache.hadoop.ipc.RemoteException(java.lang.ArrayIndexOutOfBoundsException): 0
> > > > > >     at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.getDatanodeStorageInfos(DatanodeManager.java:467)
> > > > > >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:2779)
> > > > > >     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:594)
> > > > > >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolServerSideTranslatorPB.java:430)
> > > > > >     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> > > > > >     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> > > > > >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1962)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1958)
> > > > > >     at java.security.AccessController.doPrivileged(Native Method)
> > > > > >     at javax.security.auth.Subject.doAs(Subject.java:415)
> > > > > >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> > > > > >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1956)
> > > > > >
> > > > > >     at org.apache.hadoop.ipc.Client.call(Client.java:1347)
> > > > > >     at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> > > > > >     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> > > > > >     at com.sun.proxy.$Proxy13.getAdditionalDatanode(Unknown Source)
> > > > > >     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(ClientNamenodeProtocolTranslatorPB.java:352)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >     at java.lang.reflect.Method.invoke(Method.java:606)
> > > > > >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> > > > > >     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> > > > > >     at com.sun.proxy.$Proxy14.getAdditionalDatanode(Unknown Source)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > >     at java.lang.reflect.Method.invoke(Method.java:606)
> > > > > >     at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> > > > > >     at com.sun.proxy.$Proxy15.getAdditionalDatanode(Unknown Source)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:919)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1031)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:823)
> > > > > >     at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:475)
> > > > > > 2014-06-03 13:05:48,612 DEBUG [RpcServer.handler=21,port=16020] regionserver.HRegion: rollbackMemstore rolled back 1 keyvalues from start:0 to end:1
> > > > > > 2014-06-03 13:05:48,624 INFO [regionserver16020.logRoller] regionserver.HRegionServer: STOPPED: Failed log close in log roller
> > > > > > 2014-06-03 13:05:48,624 INFO [regionserver16020.logRoller] regionserver.LogRoller: LogRoller exiting.
> > > > > > 2014-06-03 13:05:48,624 INFO [regionserver16020] ipc.RpcServer: Stopping server on 16020
> > > > > > 2014-06-03 13:05:48,624 INFO [RpcServer.handler=1,port=16020] ipc.RpcServer: RpcServer.handler=1,port=16020: exiting
> > > > >
> > > > --
> > > > -Ian Brooks
> > > > Senior server administrator - Sensewhere
> > >
> > --
> > -Ian Brooks
> > Senior server administrator - Sensewhere
>
> --
> -Ian Brooks
> Senior server administrator - Sensewhere