Try moving the configuration to hdfs-site.xml.
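
For example, something along these lines in conf/hdfs-site.xml should take
effect (just a sketch; the path below is a placeholder, point it at a local
disk on each node rather than /tmp or the shared NFS mount):

  <property>
    <name>dfs.data.dir</name>
    <!-- placeholder path; use a directory on a node-local disk -->
    <value>/data/1/dfs/data</value>
  </property>

Note that dfs.data.dir defaults to ${hadoop.tmp.dir}/dfs/data in
hdfs-default.xml, which is why it follows hadoop.tmp.dir unless you
override it explicitly.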

One word of warning: if you use /tmp to store your HDFS data, you risk
data loss. On many operating systems, files and directories in /tmp
are automatically deleted.

-Joey

On Tue, May 24, 2011 at 10:22 PM, Mark question <markq2...@gmail.com> wrote:
> Hi guys,
>
> I'm using an NFS cluster consisting of 30 machines, but only specified 3 of
> the nodes to be my hadoop cluster. So my problem is this. Datanode won't
> start in one of the nodes because of the following error:
>
> org.apache.hadoop.hdfs.server.common.Storage: Cannot lock storage
> /cs/student/mark/tmp/hodhod/dfs/data. The directory is already locked
>
> I think it's because the directory is on NFS: once one node locks it, the
> second node can't lock it. So I had to change the following
> configuration:
>       dfs.data.dir to be "/tmp/hadoop-user/dfs/data"
>
> But this configuration is overwritten by ${hadoop.tmp.dir}/dfs/data, where my
> hadoop.tmp.dir = "/cs/student/mark/tmp" as you might guess from above.
>
> Where is this configuration overwritten? I thought my core-site.xml has
> the final configuration values.
> Thanks,
> Mark
>



-- 
Joseph Echeverria
Cloudera, Inc.
443.305.9434
