Is /cs/student/mark/ on the *shared* NFS volume you mentioned in your
original post? In that case all nodes would be trying to use the exact same
directory.
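If the nodes really do share that directory, a sketch of the usual workaround (property names as in the Hadoop 0.20-era configs this thread is using; the /local/scratch path is only an example) is to give each node its own node-local storage directory in hdfs-site.xml:

```xml
<!-- hdfs-site.xml: point HDFS storage at node-local disk instead of
     the shared NFS mount, so each DataNode locks its own directory.
     Paths are examples; ${user.name} is expanded by Hadoop's config. -->
<property>
  <name>dfs.data.dir</name>
  <value>/local/scratch/${user.name}/dfs/data</value>
</property>
<property>
  <name>dfs.name.dir</name>
  <value>/local/scratch/${user.name}/dfs/name</value>
</property>
```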
Luca
On May 25, 2011 08:22:50 Mark question wrote:
I do ...
$ ls -l /cs/student/mark/tmp/hodhod
total 4
drwxr-xr-x 3 mark grad 4096 May 24 21:10 dfs
and ..
$ ls -l /tmp/hadoop-mark
total 4
drwxr-xr-x 3 mark grad 4096 May 24 21:10 dfs
$ ls -l /tmp/hadoop-maha/dfs/name/
(only "name" is created here, no "data")
Thanks again,
Mark
Do you have the right permissions on the new dirs?
Try stopping and starting the cluster...
-JJ
On May 24, 2011, at 9:13 PM, Mark question wrote:
Well, you're right ... moving it to hdfs-site.xml had an effect at least.
But now I'm hitting the namespace-incompatible error:
WARN org.apache.hadoop.hdfs.server.common.Util: Path
/tmp/hadoop-mark/dfs/data should be specified as a URI in configuration
files. Please update hdfs configuration.
java.io.
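The Util warning itself is only about notation: the same directory can be written as a file:// URI so the path is unambiguous. A hedged sketch, using the exact path from the log above:

```xml
<!-- hdfs-site.xml: the "should be specified as a URI" warning goes
     away if the storage path is written with a file:// scheme -->
<property>
  <name>dfs.data.dir</name>
  <value>file:///tmp/hadoop-mark/dfs/data</value>
</property>
```

The separate incompatible-namespace error usually means the DataNode's data directory still holds a namespaceID from a previously formatted NameNode; the common remedy is to clear that data directory (which destroys its blocks) and restart the DataNode so it re-registers.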
Try moving the configuration to hdfs-site.xml.
One word of warning, if you use /tmp to store your HDFS data, you risk
data loss. On many operating systems, files and directories in /tmp
are automatically deleted.
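The reason everything landed in /tmp in the first place is that hadoop.tmp.dir defaults to /tmp/hadoop-${user.name}, and the dfs.* storage paths are derived from it. A sketch of a single override that relocates all of them at once (the /var/lib path is only an example; pick any location that survives reboots):

```xml
<!-- core-site.xml: override the /tmp/hadoop-${user.name} default so
     HDFS metadata and blocks are not subject to /tmp cleanup -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/var/lib/hadoop/tmp</value>
</property>
```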
-Joey
On Tue, May 24, 2011 at 10:22 PM, Mark question wrote:
Hi guys,
I'm using an NFS cluster consisting of 30 machines, but I specified only 3 of
the nodes to be my Hadoop cluster. My problem is this: the DataNode won't
start on one of the nodes because of the following error:
org.apache.hadoop.hdfs.server.common.Storage: Cannot lock storage /cs/student/ma