I was trying to put a 1 GB file onto HDFS and got the following error:

09/03/10 18:23:16 WARN hdfs.DFSClient: DataStreamer Exception:
java.net.SocketTimeoutException: 5000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/171.69.102.53:34414 remote=/171.69.102.51:50010]
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:162)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
    at java.io.BufferedOutputStream.write(Unknown Source)
    at java.io.DataOutputStream.write(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2209)

09/03/10 18:23:16 WARN hdfs.DFSClient: Error Recovery for block blk_2971879428934911606_36678 bad datanode[0] 171.69.102.51:50010
put: All datanodes 171.69.102.51:50010 are bad. Aborting...
Exception closing file /user/amkhuran/221rawdata/1g
java.io.IOException: Filesystem closed
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:198)
    at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:65)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3084)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3053)
    at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:942)
    at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:210)
    at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:243)
    at org.apache.hadoop.fs.FsShell.close(FsShell.java:1842)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:1856)
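One thing I noticed: the "5000 millis timeout" in the first exception looks like it comes from dfs.datanode.socket.write.timeout, which normally defaults to a much larger value (480000 ms, i.e. 8 minutes), so I'm wondering if it was overridden somewhere in my config. A sketch of what I mean, in hdfs-site.xml (property shown with the stock default value; I haven't confirmed this is what my cluster is actually using):

```xml
<!-- hdfs-site.xml: write timeout for the DataNode socket, in milliseconds.
     A value as low as 5000 would explain the SocketTimeoutException above. -->
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>480000</value>
</property>
```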


What's going wrong?

Amandeep


Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz