All,

I have a small Hadoop cluster (2.5.0) with 4 datanodes and 3 data disks per node. Lately some of the volumes have been filling up, but instead of moving on to other configured volumes that *have* free space, the datanode logs errors like this:

2014-10-03 11:52:44,989 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: thor2.xmen.eti:50010:DataXceiver error processing WRITE_BLOCK operation  src: /172.17.1.3:35412 dst: /172.17.1.2:50010
java.io.IOException: No space left on device
    at java.io.FileOutputStream.writeBytes(Native Method)
    at java.io.FileOutputStream.write(FileOutputStream.java:345)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:592)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:734)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:741)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:234)
    at java.lang.Thread.run(Thread.java:745)
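
For reference, each datanode lists its three disks in dfs.datanode.data.dir in hdfs-site.xml, roughly like the sketch below (the mount points here are illustrative, not my exact paths):

  <!-- hdfs-site.xml sketch; /data1..3 are placeholder mount points -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data1/hdfs/dn,/data2/hdfs/dn,/data3/hdfs/dn</value>
  </property>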

Unfortunately the datanode keeps trying to write to the full volume, and each failure passes the exception back to the client.

After I restarted the datanode, it seemed to figure out that it should move on to the next volume.

Any suggestions to keep this from happening in the future?

Also - could it be an issue that I have a small amount of non-HDFS data on those volumes?
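
Would something along these lines in hdfs-site.xml be the right way to keep headroom for that non-HDFS data and to have blocks placed by available space rather than round-robin? (The reserved value below is just a placeholder, not something I'm running.)

  <!-- sketch only; 10737418240 is a placeholder (~10 GB reserved per volume for non-HDFS use) -->
  <property>
    <name>dfs.datanode.du.reserved</name>
    <value>10737418240</value>
  </property>
  <property>
    <name>dfs.datanode.fsdataset.volume.choosing.policy</name>
    <value>org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy</value>
  </property>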

Thanks,
Brian
