Jianbiao Huang created HBASE-8331:
-------------------------------------

             Summary: DataXceiver java.io.IOException: Interrupted receiveBlock
                 Key: HBASE-8331
                 URL: https://issues.apache.org/jira/browse/HBASE-8331
             Project: HBase
          Issue Type: Bug
          Components: regionserver
    Affects Versions: 0.94.0
            Reporter: Jianbiao Huang



------------------------------------------------------------------------------
datanode log:

2013-04-12 09:08:15,100 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.10.253:50010, storageID=DS-1973019085-192.168.10.253-50010-1365402703803, infoPort=50075, ipcPort=50020):DataXceiver
java.io.IOException: Interrupted receiveBlock
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:577)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:398)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:107)
        at java.lang.Thread.run(Thread.java:662)
2013-04-12 09:08:15,109 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: oldblock=blk_5082034066542916448_46164(length=33759255), newblock=blk_5082034066542916448_46247(length=33759255), datanode=192.168.10.253:50010

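For context (my reading of the log, not a confirmed diagnosis): the ERROR appears when the thread running receiveBlock is interrupted, and the following INFO line shows the same block being reopened with a bumped generation stamp (46164 -> 46247) at an unchanged length, i.e. a block/pipeline recovery. Below is a minimal standalone Java sketch of that interrupt-to-IOException pattern; class and method names are hypothetical and this is not the Hadoop source.

import java.io.IOException;

/**
 * Illustrative only: a receiver thread that is interrupted while it waits
 * rethrows the interrupt as an IOException, so the "Interrupted receiveBlock"
 * ERROR is usually a symptom of the writer being torn down, not an
 * independent failure.
 */
public class InterruptedReceiveSketch {
    public static void main(String[] args) throws Exception {
        Thread receiver = new Thread(() -> {
            try {
                receiveBlock();
            } catch (IOException e) {
                // Mirrors the DataXceiver log line in this issue.
                System.err.println("ERROR DataXceiver: " + e);
            }
        });
        receiver.start();
        Thread.sleep(100);
        receiver.interrupt();   // stands in for recovery interrupting the writer
        receiver.join();
    }

    static void receiveBlock() throws IOException {
        try {
            Thread.sleep(60_000); // stands in for waiting on incoming packets/acks
        } catch (InterruptedException e) {
            throw new IOException("Interrupted receiveBlock");
        }
    }
}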

------------------------------------------------------------------------------
regionserver log:

2013-04-12 09:08:34,955 WARN org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block blk_5082034066542916448_46164
java.io.EOFException
        at java.io.DataInputStream.readFully(DataInputStream.java:180)
        at java.io.DataInputStream.readLong(DataInputStream.java:399)
        at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:124)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2964)

2013-04-12 09:08:34,977 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block blk_5082034066542916448_46164 bad datanode[0] 192.168.10.253:50010
2013-04-12 09:08:34,978 WARN org.apache.hadoop.hdfs.DFSClient: Error while syncing
java.io.IOException: All datanodes 192.168.10.253:50010 are bad. Aborting...
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3093)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2200(DFSClient.java:2586)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2790)
2013-04-12 09:08:34,978 FATAL org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync. Requesting close of hlog
java.io.IOException: All datanodes 192.168.10.253:50010 are bad. Aborting...
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3093)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2200(DFSClient.java:2586)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2790)
2013-04-12 09:08:34,978 FATAL org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync. Requesting close of hlog
java.io.IOException: All datanodes 192.168.10.253:50010 are bad. Aborting...
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3093)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2200(DFSClient.java:2586)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2790)


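Again as a reading of the log rather than a confirmed root cause: the client's ResponseProcessor hits EOF on the ack stream, error recovery drops datanode[0], and since 192.168.10.253:50010 is the only node listed in the pipeline there is nothing left to rebuild it with, so every later sync fails with "All datanodes ... are bad" and the HLog requests a close/roll. The following self-contained sketch shows that "single-replica pipeline -> all datanodes bad" shape; it is illustrative only, with hypothetical names, and is not the DFSClient/HLog source.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative only: once the single bad datanode is removed from the write
 * pipeline, the pipeline is empty, so the stream can only abort; the WAL sync
 * sees the aborted stream and asks for a log roll.
 */
public class PipelineRecoverySketch {
    static void processDatanodeError(List<String> pipeline, int badNodeIndex)
            throws IOException {
        String bad = pipeline.remove(badNodeIndex);
        System.out.println("Error Recovery: bad datanode[" + badNodeIndex + "] " + bad);
        if (pipeline.isEmpty()) {
            // No replicas left to rebuild the pipeline with.
            throw new IOException("All datanodes " + bad + " are bad. Aborting...");
        }
        // Otherwise the remaining nodes would be used to rebuild the pipeline.
    }

    public static void main(String[] args) {
        List<String> pipeline = new ArrayList<>();
        pipeline.add("192.168.10.253:50010"); // only one node in this pipeline
        try {
            processDatanodeError(pipeline, 0);
        } catch (IOException e) {
            // Mirrors the FATAL lines above: the WAL sync fails and a roll is requested.
            System.err.println("FATAL HLog: Could not sync. Requesting close of hlog");
            System.err.println(e);
        }
    }
}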

------------------------------------------------------------------------------



