Re: Why I cannot delete all the nameNode metadata?

2014-10-08 Thread ViSolve Hadoop Support

Hello,

The default HDFS home location is /user, and you can't delete the home directory for hdfs. If you create a file or directory with a relative path, it will be created under /user.


For example: hdfs dfs -mkdir name
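In other words, relative and absolute paths behave differently (a sketch; the username tianyin is only an assumed example based on the /tmp/hadoop-tianyin* paths later in this thread):

```shell
# Relative path: resolved against the user's HDFS home directory,
# e.g. /user/tianyin for a user named "tianyin" (assumed example).
hdfs dfs -mkdir name       # creates /user/tianyin/name

# Absolute path: created exactly where specified.
hdfs dfs -mkdir /data      # creates /data at the filesystem root

# With no path argument, -ls lists the home directory /user/<username>.
hdfs dfs -ls
```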

Regards,
ViSolve Hadoop Support

On 10/8/2014 10:44 AM, Tianyin Xu wrote:

The former: I use
#hdfs dfs -ls

and I can see the directory /user

(and that's why I cannot use hdfs dfs -mkdir to create a new one)

~t

On Tue, Oct 7, 2014 at 9:06 PM, Azuryy Yu azury...@gmail.com wrote:


First, make sure your dfs.namenode.name.dir is set to the default.
Then, how did you find that /user exists: via hdfs dfs -ls, or by
checking dfs.datanode.data.dir? If the latter, don't worry.
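The two checks above can be sketched like this (the /tmp path is an assumption based on the default layout mentioned later in the thread):

```shell
# Ask Hadoop which storage directories are actually configured.
hdfs getconf -confKey dfs.namenode.name.dir
hdfs getconf -confKey dfs.datanode.data.dir

# Check 1: what the NameNode's namespace contains.
hdfs dfs -ls /

# Check 2: what sits on local disk under the DataNode's storage directory.
# Leftover block files here alone are harmless; only entries visible via
# `hdfs dfs -ls` mean the namespace still holds the old metadata.
ls /tmp/hadoop-tianyin/dfs/data
```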


On Wed, Oct 8, 2014 at 11:56 AM, Tianyin Xu t...@cs.ucsd.edu wrote:

Hi,

I want to run some experiments on Hadoop which require a clean,
initial system state of HDFS for every job execution, i.e.,
HDFS should be freshly formatted and contain nothing.

I keep dfs.datanode.data.dir and dfs.namenode.name.dir at their
defaults, which are located under /tmp.

Every time before running a job,

1. I first delete dfs.datanode.data.dir and dfs.namenode.name.dir
#rm -Rf /tmp/hadoop-tianyin*

2. Then I format the nameNode
#bin/hdfs namenode -format

3. Start HDFS
sbin/start-dfs.sh

4. However, I still find the previous metadata (e.g., the
directory I previously created) in HDFS, for example,
#bin/hdfs dfs -mkdir /user
mkdir: `/user': File exists
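The steps above can be collected into a single reset script (a sketch assuming a single-node setup with the default /tmp layout; note it also stops HDFS first, since deleting the directories out from under a running NameNode would not clear its in-memory namespace):

```shell
#!/bin/sh
# Reset HDFS to an empty, freshly formatted state (single-node sketch).
set -e
cd "$HADOOP_HOME"        # assumes HADOOP_HOME points at the Hadoop install

sbin/stop-dfs.sh         # stop NameNode/DataNode before touching the disk

# 1. Delete the default dfs.namenode.name.dir and dfs.datanode.data.dir.
rm -Rf /tmp/hadoop-tianyin*

# 2. Reformat the NameNode (-force skips the interactive prompt).
bin/hdfs namenode -format -force

# 3. Start HDFS again.
sbin/start-dfs.sh

# 4. Verify the namespace is empty.
bin/hdfs dfs -ls /
```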

Could anyone tell me what I missed or misunderstood? Why can I
still see the old data after both physically deleting the
directories and reformatting the HDFS NameNode?

Thanks a lot for your help!
Tianyin





--
Tianyin XU,
http://cseweb.ucsd.edu/~tixu/



