[ 
https://issues.apache.org/jira/browse/HDFS-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-2111:
--------------------------

    Attachment: HDFS-2111.r1.diff

Attached a patch that adds a new test case to TestDataNodeVolumeFailureToleration, covering DiskChecker failures at start-up as well.

To ensure that no permission issues crop up due to the patch, I did the following runs. Please review the additions and let me know if I missed any cleanup or teardown operation:
{code}
ant test -Dtestcase=TestDataNodeVolumeFailureToleration
ant test -Dtestcase=TestDataNodeVolumeFailureToleration
ant clean
ant test -Dtestcase=TestDataNodeVolumeFailureToleration
{code}

The logs show a proper failure condition being handled:
{code}
2011-06-30 23:53:01,431 WARN  datanode.DataNode (DataNode.java:getDataDirsFromURIs(2194)) - Invalid directory in: dfs.datanode.data.dir:
java.io.FileNotFoundException: File file:/Users/harshchouraria/Work/code/apache/hadoop/hdfs/build/test/data/dfs/badData/data2/2 does not exist.
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:424)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:315)
        at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:131)
        at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:148)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.getDataDirsFromURIs(DataNode.java:2191)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2170)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2107)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2074)
        at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:884)
        at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:771)
        at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:929)
        at org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration.testValidVolumesAtStartup(TestDataNodeVolumeFailureToleration.java:127)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
{code}

As expected, the DN starts up fine with just the one good dir.
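For reviewers skimming the log above, here is a minimal, self-contained sketch of the behavior under test: at start-up the DN checks each configured data dir, warns and skips any dir that fails the check, and only aborts if no usable dir remains. This is a simplified illustration, not Hadoop's actual DiskChecker or DataNode code; the class and method names below (DataDirFilterSketch, isUsableDir, filterUsableDirs) are hypothetical.

```java
// Simplified sketch of the tolerate-bad-data-dirs behavior exercised by
// the test. NOT the real Hadoop implementation; DiskChecker.checkDir
// performs more checks (mkdirs, permissions) than shown here.
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class DataDirFilterSketch {

    // Hypothetical stand-in for DiskChecker.checkDir's basic validation:
    // the dir must exist, be a directory, and be readable and writable.
    public static boolean isUsableDir(File dir) {
        return dir.isDirectory() && dir.canRead() && dir.canWrite();
    }

    // Mirrors the spirit of DataNode.getDataDirsFromURIs: keep the dirs
    // that pass the check, log a warning for the rest. A caller would
    // treat an empty result as a fatal start-up error.
    public static List<File> filterUsableDirs(List<File> candidates) {
        List<File> usable = new ArrayList<File>();
        for (File dir : candidates) {
            if (isUsableDir(dir)) {
                usable.add(dir);
            } else {
                System.err.println(
                    "Invalid directory in dfs.datanode.data.dir: " + dir);
            }
        }
        return usable;
    }

    public static void main(String[] args) {
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        File good = new File(tmp, "goodData");
        good.mkdirs();                               // a valid data dir
        File bad = new File(tmp, "badData/missing"); // deliberately absent

        List<File> dirs = new ArrayList<File>();
        dirs.add(good);
        dirs.add(bad);

        // One of two dirs is bad; start-up would still proceed.
        List<File> usable = filterUsableDirs(dirs);
        System.out.println("usable dirs: " + usable.size());
    }
}
```

Under these assumptions, the sketch prints a warning for the missing dir and proceeds with the single good one, which is exactly the condition the new test asserts.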

> Add tests for ensuring that the DN will start with a few bad data directories 
> (Part 1 of testing DiskChecker)
> -------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-2111
>                 URL: https://issues.apache.org/jira/browse/HDFS-2111
>             Project: Hadoop HDFS
>          Issue Type: Test
>          Components: data-node, test
>    Affects Versions: 0.23.0
>            Reporter: Harsh J
>            Assignee: Harsh J
>              Labels: test
>             Fix For: 0.23.0
>
>         Attachments: HDFS-2111.r1.diff
>
>
> Add tests to ensure that, given multiple data dirs, the DN still starts up if a single one is bad.
> This is to check the DiskChecker functionality used in instantiating DataNodes.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
