I do ...

$ ls -l /cs/student/mark/tmp/hodhod
total 4
drwxr-xr-x 3 mark grad 4096 May 24 21:10 dfs

and ..

$ ls -l /tmp/hadoop-mark
total 4
drwxr-xr-x 3 mark grad 4096 May 24 21:10 dfs

$ ls -l /tmp/hadoop-maha/dfs/name/       <<<< only "name" is created here, no "data"
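For what it's worth, here is the quick sanity check I can run to confirm the mismatch by comparing the namespaceID lines in the two VERSION files (the snippet uses throwaway /tmp/demo paths and made-up IDs, not the real cluster dirs):

```shell
# Sketch: a datanode refuses to start with "Incompatible namespaceIDs"
# when the ID in its VERSION file differs from the namenode's.
# /tmp/demo and both IDs below are demo values, not the real layout.
mkdir -p /tmp/demo/name/current /tmp/demo/data/current
echo "namespaceID=1136226500" > /tmp/demo/name/current/VERSION
echo "namespaceID=1761544148" > /tmp/demo/data/current/VERSION

# Pull the ID out of each VERSION file and compare.
nn_id=$(grep '^namespaceID=' /tmp/demo/name/current/VERSION | cut -d= -f2)
dn_id=$(grep '^namespaceID=' /tmp/demo/data/current/VERSION | cut -d= -f2)

if [ "$nn_id" = "$dn_id" ]; then
  echo "namespaceIDs match"
else
  echo "Incompatible namespaceIDs: name=$nn_id data=$dn_id"
fi
```

Against the real dirs the same two greps on dfs/name/current/VERSION and dfs/data/current/VERSION would show whether this is really the problem.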

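One thing I may try next, since NFS locking was the original trigger: a per-node hdfs-site.xml that keeps all datanode storage on a node-local disk, so no two nodes ever contend for the same lock. The /local/scratch path below is hypothetical; any non-NFS, non-/tmp disk would do:

```xml
<!-- Sketch only (fragment for hdfs-site.xml): point storage at a
     node-local disk. /local/scratch is a hypothetical per-node path. -->
<property>
   <name>dfs.data.dir</name>
   <value>/local/scratch/mark/dfs/data</value>
</property>
<property>
   <name>hadoop.tmp.dir</name>
   <value>/local/scratch/mark/tmp</value>
</property>
```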
Thanks again,
Mark

On Tue, May 24, 2011 at 9:26 PM, Mapred Learn <mapred.le...@gmail.com>wrote:

> Do you have the right permissions on the new dirs?
> Try stopping and starting the cluster...
>
> -JJ
>
> On May 24, 2011, at 9:13 PM, Mark question <markq2...@gmail.com> wrote:
>
> > Well, you're right ... moving it to hdfs-site.xml had an effect, at least.
> > But now I'm hitting the incompatible namespaceIDs error:
> >
> > WARN org.apache.hadoop.hdfs.server.common.Util: Path
> > /tmp/hadoop-mark/dfs/data should be specified as a URI in configuration
> > files. Please update hdfs configuration.
> > java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop-mark/dfs/data
> >
> > My configuration for this part in hdfs-site.xml:
> > <configuration>
> > <property>
> >    <name>dfs.data.dir</name>
> >    <value>/tmp/hadoop-mark/dfs/data</value>
> > </property>
> > <property>
> >    <name>dfs.name.dir</name>
> >    <value>/tmp/hadoop-mark/dfs/name</value>
> > </property>
> > <property>
> >    <name>hadoop.tmp.dir</name>
> >    <value>/cs/student/mark/tmp/hodhod</value>
> > </property>
> > </configuration>
> >
> > The reason I want to change hadoop.tmp.dir is that the student quota
> > under /tmp is small, so I want to use /cs/student instead for
> > hadoop.tmp.dir.
> >
> > Thanks,
> > Mark
> >
> > On Tue, May 24, 2011 at 7:25 PM, Joey Echeverria <j...@cloudera.com>
> wrote:
> >
> >> Try moving the configuration to hdfs-site.xml.
> >>
> >> One word of warning, if you use /tmp to store your HDFS data, you risk
> >> data loss. On many operating systems, files and directories in /tmp
> >> are automatically deleted.
> >>
> >> -Joey
> >>
> >> On Tue, May 24, 2011 at 10:22 PM, Mark question <markq2...@gmail.com>
> >> wrote:
> >>> Hi guys,
> >>>
> >>> I'm using an NFS cluster of 30 machines, but I specified only 3 of
> >>> the nodes to be my Hadoop cluster. My problem is this: the datanode
> >>> won't start on one of the nodes because of the following error:
> >>>
> >>> org.apache.hadoop.hdfs.server.common.Storage: Cannot lock storage
> >>> /cs/student/mark/tmp/hodhod/dfs/data. The directory is already locked
> >>>
> >>> I think it's because the storage directory is on NFS: one node locks
> >>> it, and then the second node can't. So I had to change the following
> >>> configuration:
> >>>      dfs.data.dir to be "/tmp/hadoop-user/dfs/data"
> >>>
> >>> But this configuration is overridden by ${hadoop.tmp.dir}/dfs/data, where
> >>> my hadoop.tmp.dir = "/cs/student/mark/tmp", as you might guess from above.
> >>>
> >>> Where is this configuration overridden? I thought my core-site.xml had
> >>> the final configuration values.
> >>> Thanks,
> >>> Mark
> >>>
> >>
> >>
> >>
> >> --
> >> Joseph Echeverria
> >> Cloudera, Inc.
> >> 443.305.9434
> >>
>