loss of VERSION file on datanode when trying to start up with full disk
----------------------------------------------------------------------

                 Key: HADOOP-2550
                 URL: https://issues.apache.org/jira/browse/HADOOP-2550
             Project: Hadoop
          Issue Type: Bug
          Components: dfs
    Affects Versions: 0.14.4
         Environment: FCLinux
            Reporter: Joydeep Sen Sarma
            Priority: Critical


The datanode was working OK previously; a subsequent bringup of the datanode fails:

/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = hadoop003.sf2p.facebook.com/10.16.159.103
STARTUP_MSG:   args = []
************************************************************/
2008-01-08 08:23:38,400 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=DataNode, sessionId=null
2008-01-08 08:23:48,491 INFO org.apache.hadoop.ipc.RPC: Problem connecting to server: hadoop001.sf2p.facebook.com/10.16.159.101:9000
2008-01-08 08:23:59,495 INFO org.apache.hadoop.ipc.RPC: Problem connecting to server: hadoop001.sf2p.facebook.com/10.16.159.101:9000
2008-01-08 08:24:01,597 ERROR org.apache.hadoop.dfs.DataNode: java.io.IOException: No space left on device
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:260)
        at sun.nio.cs.StreamEncoder$CharsetSE.writeBytes(StreamEncoder.java:336)
        at sun.nio.cs.StreamEncoder$CharsetSE.implFlushBuffer(StreamEncoder.java:404)
        at sun.nio.cs.StreamEncoder$CharsetSE.implFlush(StreamEncoder.java:408)
        at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:152)
        at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:213)
        at java.io.BufferedWriter.flush(BufferedWriter.java:236)
        at java.util.Properties.store(Properties.java:666)
        at org.apache.hadoop.dfs.Storage$StorageDirectory.write(Storage.java:176)
        at org.apache.hadoop.dfs.Storage$StorageDirectory.write(Storage.java:164)
        at org.apache.hadoop.dfs.Storage.writeAll(Storage.java:510)
        at org.apache.hadoop.dfs.DataStorage.recoverTransitionRead(DataStorage.java:146)
        at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:243)
        at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:206)
        at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:1391)
        at org.apache.hadoop.dfs.DataNode.run(DataNode.java:1335)
        at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:1356)
        at org.apache.hadoop.dfs.DataNode.main(DataNode.java:1525)

2008-01-08 08:24:01,597 INFO org.apache.hadoop.dfs.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at hadoop003.sf2p.facebook.com/10.16.159.103

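The root cause is visible in the trace: Storage$StorageDirectory.write streams the properties straight into the VERSION file via Properties.store, so an ENOSPC partway through the write leaves the file truncated and the previous contents are lost. A minimal sketch of a write-to-temp-then-rename pattern that would preserve the old file on a full disk (class and method names here are hypothetical, not the actual Hadoop code or fix):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

// Hypothetical sketch: write the new VERSION contents to a temporary
// file first, force it to disk, and only then rename it over the old
// file.  If the disk is full, the write to the temp file fails, but
// the existing VERSION file is left intact.
public class AtomicVersionWrite {
    public static void write(File storageDir, Properties props) throws IOException {
        File tmp = new File(storageDir, "VERSION.tmp");
        File version = new File(storageDir, "VERSION");
        try (FileOutputStream out = new FileOutputStream(tmp)) {
            props.store(out, "storage metadata"); // an ENOSPC would surface here
            out.getFD().sync();                   // flush to disk before the rename
        } catch (IOException e) {
            tmp.delete();                         // old VERSION is untouched
            throw e;
        }
        if (!tmp.renameTo(version)) {             // rename is atomic on POSIX filesystems
            throw new IOException("rename " + tmp + " -> " + version + " failed");
        }
    }
}
```

With this pattern the worst case on a full disk is a stale VERSION file plus a leftover temp file, rather than a zero-length VERSION that blocks datanode startup.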
