HDFS-457 increases the chances of losing blocks
------------------------------------------------

                 Key: HDFS-1158
                 URL: https://issues.apache.org/jira/browse/HDFS-1158
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: data-node
    Affects Versions: 0.21.0
            Reporter: Koji Noguchi
            Priority: Critical


Whenever we restart a cluster, there's a chance of losing some blocks if more 
than three datanodes don't come back up.
HDFS-457 increases this chance by keeping a datanode up even when
   # the /tmp disk goes read-only
   # /disk0, which is used for storing the PID file, goes read-only
and probably in other cases as well.

In our environment, /tmp and /disk0 are on the same device.

When trying to restart such a datanode, it fails with either
1) 
{noformat}
2010-05-15 05:45:45,575 WARN org.mortbay.log: tmpdir
java.io.IOException: Read-only file system
        at java.io.UnixFileSystem.createFileExclusively(Native Method)
        at java.io.File.checkAndCreate(File.java:1704)
        at java.io.File.createTempFile(File.java:1792)
        at java.io.File.createTempFile(File.java:1828)
        at org.mortbay.jetty.webapp.WebAppContext.getTempDirectory(WebAppContext.java:745)
{noformat}
or 
2) 
{noformat}
hadoop-daemon.sh: line 117: /disk/0/hadoop-datanode....com.out: Read-only file system
hadoop-daemon.sh: line 118: /disk/0/hadoop-datanode.pid: Read-only file system
{noformat}

I can recover the missing blocks, but it takes some time.

Also, we lose track of block movements, since the log directory can also go 
read-only while the datanode continues running.
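
As a stopgap on the operations side, even a trivial writability probe would detect this condition. Below is a minimal, self-contained sketch (the class and method names are hypothetical, not an existing Hadoop API) that checks whether a directory is still writable by creating and deleting a scratch file; a datanode-side check along these lines could flag a log directory that has gone read-only instead of running blind:

{code:java}
import java.io.File;
import java.io.IOException;

/**
 * Sketch only: probe whether a directory is still writable by
 * creating and deleting a scratch file. Names are hypothetical.
 */
public class WritableDirProbe {

    /** Returns true if a file can actually be created under dir. */
    public static boolean isWritable(File dir) {
        try {
            // createTempFile fails with "Read-only file system" on a
            // remounted-read-only device, as in the Jetty trace above.
            File probe = File.createTempFile("probe", ".tmp", dir);
            probe.delete();
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        File logDir = new File(args.length > 0 ? args[0] : "/tmp");
        if (!isWritable(logDir)) {
            System.err.println(logDir + " is not writable; shutting the "
                + "datanode down would be safer than running blind.");
        }
    }
}
{code}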

For the 0.21 release, can we revert HDFS-457 or make it configurable?
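
If reverting is not an option, here is a sketch of the kind of configuration gate I have in mind; the key name and class below are hypothetical, not an existing HDFS API. The idea is that setting the tolerated-failure count to 0 would restore the pre-HDFS-457 behavior of shutting the datanode down on the first failed volume:

{code:java}
/**
 * Sketch only (hypothetical names, not an actual HDFS API): instead of
 * always tolerating failed volumes as HDFS-457 does, let operators
 * choose how many volume failures a datanode may survive before it
 * shuts itself down.
 */
public class VolumeFailurePolicy {

    /** Hypothetical config key, analogous to other dfs.datanode.* keys. */
    public static final String TOLERATED_KEY =
        "dfs.datanode.failed.volumes.tolerated";

    private final int toleratedFailures;

    public VolumeFailurePolicy(int toleratedFailures) {
        this.toleratedFailures = toleratedFailures;
    }

    /** True if the datanode should exit rather than keep running. */
    public boolean shouldShutdown(int failedVolumes) {
        return failedVolumes > toleratedFailures;
    }
}
{code}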

