I am considering a redundant NameNode configuration using keepalived 
(active/standby). (See the attached file redundant.gif and the 
configuration file hadoop-site.xml.)

The directory "/dfs/name" on the standby machine is NFS-mounted on the 
active machine as "/dfs/namerep", and both directories are listed in 
dfs.name.dir in the configuration file. In this way the filesystem 
metadata is kept redundantly on both machines.

However, as shown in the attached file result.gif, when a failover was 
triggered while writing 100 MB of data with the sample program, the 
resulting file was not complete.


The reproduction procedure is as follows.

1. Run the sample program (a rough sketch of this program is included 
after the procedure):
java sample.HadoopWriteToDFS 104857600 /dfs/users/kajiwara/test.data

2. Kill the NameNode process on the active machine. 
(start-dfs.sh is then started on the standby machine.)

After a while the command finishes normally, but as mentioned above the 
file is not complete.
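
For reference, the sample program simply writes the requested number of 
bytes to a DFS path. The following is only a minimal sketch of such a 
writer, not the exact attached source; the package/class name 
sample.HadoopWriteToDFS, the buffer size, and the write loop are my own 
assumptions about what the program does.

package sample;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Usage: java sample.HadoopWriteToDFS <bytes> <dfs-path>
public class HadoopWriteToDFS {
    public static void main(String[] args) throws Exception {
        long total = Long.parseLong(args[0]);     // e.g. 104857600 (100 MB)
        Path dst = new Path(args[1]);             // e.g. /dfs/users/kajiwara/test.data

        Configuration conf = new Configuration(); // reads hadoop-site.xml from the classpath
        FileSystem fs = FileSystem.get(conf);

        byte[] buf = new byte[64 * 1024];
        FSDataOutputStream out = fs.create(dst);
        try {
            long written = 0;
            while (written < total) {
                int n = (int) Math.min(buf.length, total - written);
                out.write(buf, 0, n);             // the failover happens somewhere in this loop
                written += n;
            }
        } finally {
            out.close();                          // returns normally even though the file ends up short
        }
    }
}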

Please let me know if there is a way to solve the above issue.

H.KAJIWARA
[EMAIL PROTECTED]
<?xml version="1.0"?>
<configuration>
    <property>
        <name>fs.listen.address</name>
        <value>0.0.0.0:9000</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
<!--for active Namenode-->
        <value>/dfs/name,/dfs/namerep</value>
<!--for standby Namenode
        <value>/dfs/name</value>
-->
        <description>Determines where on the local filesystem the DFS name node
        should store the name table.  If this is a comma-delimited list
        of directories then the name table is replicated in all of the
        directories, for redundancy. </description>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://dfs-master:9000</value>
        <description>The name of the default file system.  A URI whose
        scheme and authority determine the FileSystem implementation.  The
        uri's scheme determines the config property (fs.SCHEME.impl) naming
        the FileSystem implementation class.  The uri's authority is used to
        determine the host, port, etc. for a filesystem.</description>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/dfs/export</value>
        <description>Determines where on the local filesystem a DFS data node
        should store its blocks.  If this is a comma-delimited
        list of directories, then data will be stored in all named
        directories, typically on different devices.
        Directories that do not exist are ignored.
        </description>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
        <description>Default block replication.
        The actual number of replications can be specified when the file is created.
        The default is used if replication is not specified at create time.
        </description>
    </property>
    <property>
        <name>dfs.permissions.supergroup</name>
        <value>dfs</value>
        <description>The name of the group of super-users.</description>
    </property>

<!--for MapReduce-->
    <property>
        <name>mapred.job.tracker</name>
        <value>dfs-master:10000</value>
        <description>The host and port that the MapReduce job tracker runs
        at.  If "local", then jobs are run in-process as a single map
        and reduce task.
        </description>
    </property>
    <property>
        <name>mapred.system.dir</name>
        <value>/mapred/system</value>
        <description>The shared directory where MapReduce stores control files.
        </description>
    </property>
</configuration>
