Re: how to check hdfs

2015-03-03 Thread Shengdi Jin
I use the command
./hdfs dfs -ls hdfs://master:9000/
It works, so I think hdfs://master:9000/ should be the HDFS root.

I have another question: if I run
./hdfs dfs -mkdir hdfs://master:9000/directory
where is /directory stored?
In a DataNode, in the NameNode, or in the local file system of master?

On Tue, Mar 3, 2015 at 8:06 AM, 杨浩 yangha...@gmail.com wrote:

 I don't think it is necessary to run a daemon on that client for this command,
 and hdfs itself is not a Hadoop daemon.

 2015-03-03 20:57 GMT+08:00 Somnath Pandeya somnath_pand...@infosys.com:

  Are your HDFS daemons running on the cluster?



 From: Vikas Parashar [mailto:para.vi...@gmail.com]
 Sent: Tuesday, March 03, 2015 10:33 AM
 To: user@hadoop.apache.org
 Subject: Re: how to check hdfs



 Hi,



 Kindly install the hadoop-hdfs rpm on your machine.
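
 If the client machine uses a yum-based Hadoop repository (an assumption; the
 exact package name varies by distribution), that typically means something like:

 yum install hadoop-hdfs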



 Rg:

 Vicky



 On Mon, Mar 2, 2015 at 11:19 PM, Shengdi Jin jinshen...@gmail.com
 wrote:

  Hi all,

 I have just started to learn Hadoop, and I have a naive question.

 I used

 hdfs dfs -ls /home/cluster

 to check the content inside.

 But I get the error
 ls: No FileSystem for scheme: hdfs

 My configuration file core-site.xml is like
 <configuration>
 <property>
   <name>fs.defaultFS</name>
   <value>hdfs://master:9000</value>
 </property>
 </configuration>


 hdfs-site.xml is like
 <configuration>
 <property>
    <name>dfs.replication</name>
    <value>2</value>
 </property>
 <property>
    <name>dfs.name.dir</name>
    <value>file:/home/cluster/mydata/hdfs/namenode</value>
 </property>
 <property>
    <name>dfs.data.dir</name>
    <value>file:/home/cluster/mydata/hdfs/datanode</value>
 </property>
 </configuration>

 Is there anything wrong?

 Thanks a lot.








configure a backup namenode

2015-03-03 Thread Shengdi Jin
Hi all,

I have a small cluster with one NameNode (namenode1) and one DataNode.

I want to configure another NameNode (namenode2) to replace namenode1, by only
replicating the files in namenode1's name directory to namenode2 and
changing namenode2's IP address to namenode1's.

I tried this, and the replacement namenode2 works with the DataNode: I can
start/stop HDFS, create directories, delete directories, and run a new MapReduce job.

But when I look for the directories and files created under namenode1,
nothing is found.

So I suspect that the block-to-DataNode mapping information is not included in
the namenode directory of namenode1.

Am I right? Does anyone know how the NameNode manages the block mapping
information? Please give me some ideas.

If I am wrong, please correct me. Thanks a lot.

Shengdi


Re: how to check hdfs

2015-03-03 Thread Shengdi Jin
Thanks Vikas.

I run ./hdfs dfs -ls /home/cluster on the machine running the NameNode.
Do I need to configure a client machine?

I suspect that the local file system path /home/cluster is simply not part of HDFS.
In core-site.xml, I set the default file system to hdfs://master:9000,
so I think that is why the command ./hdfs dfs -ls hdfs://master:9000/
works.

Please correct me if I am wrong.
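
With fs.defaultFS set to hdfs://master:9000, a bare path and the full URI should
resolve to the same HDFS location once the hdfs scheme can be loaded; a quick
comparison (a sketch using the cluster settings quoted in this thread):

./hdfs dfs -ls /
./hdfs dfs -ls hdfs://master:9000/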

On Tue, Mar 3, 2015 at 1:59 PM, Vikas Parashar para.vi...@gmail.com wrote:

 Hello,

   hdfs dfs -ls /home/cluster
 to check the content inside.
 But I get error
 ls: No FileSystem for scheme: hdfs -- that means you don't have the hdfs rpm
 installed on your client machine.
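
 A quick way to confirm whether the HDFS implementation is visible on the client
 (a sketch, not from the original reply) is to inspect the client's classpath:

 hadoop classpath | tr ':' '\n' | grep hdfs

 If no hadoop-hdfs jars show up there, installing the hadoop-hdfs package usually
 fixes "No FileSystem for scheme: hdfs"; a commonly used fallback (an assumption
 about this setup, not something the thread confirms) is to declare the
 implementation explicitly in core-site.xml:

 <property>
   <name>fs.hdfs.impl</name>
   <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
 </property>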


 To answer your question about
 ./hdfs dfs -mkdir hdfs://master:9000/directory


 That directory will be created under / in your HDFS. All file data is stored
 on the DataNodes, but the NameNode holds the metadata. For more details, see
 the HDFS design document:
 http://hadoop.apache.org/docs/r1.2.1/hdfs_design.html
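
 To see this split in practice, the standard fsck tool prints the blocks and
 DataNode locations that the NameNode tracks for a path (a sketch assuming the
 directory from the question already exists):

 hdfs fsck /directory -files -blocks -locations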

how to check hdfs

2015-03-02 Thread Shengdi Jin
Hi all,
I have just started to learn Hadoop, and I have a naive question.

I used
hdfs dfs -ls /home/cluster
to check the content inside.
But I get the error
ls: No FileSystem for scheme: hdfs

My configuration file core-site.xml is like
<configuration>
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:9000</value>
</property>
</configuration>

hdfs-site.xml is like
<configuration>
<property>
   <name>dfs.replication</name>
   <value>2</value>
</property>
<property>
   <name>dfs.name.dir</name>
   <value>file:/home/cluster/mydata/hdfs/namenode</value>
</property>
<property>
   <name>dfs.data.dir</name>
   <value>file:/home/cluster/mydata/hdfs/datanode</value>
</property>
</configuration>

Is there anything wrong?

Thanks a lot.
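
Since one reply above asks whether the HDFS daemons are running, a quick way to
verify that on the NameNode host (a sketch using standard commands; exact daemon
names depend on the Hadoop version) is:

jps
hdfs dfsadmin -report

jps should list a NameNode process (and a DataNode process on the worker), and
dfsadmin -report shows whether any DataNodes have registered with the NameNode.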