It's a warning, not an error...

Create a directory and then do ls. (In your case /user/hduser is not
created until you create a directory or put a file there for the first
time.)

hadoop fs -mkdir sample

hadoop fs -ls
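
If you want to be explicit about the path, you can also create the home
directory directly and list it (assuming your user is hduser, so the HDFS
home directory is /user/hduser, as in the tutorial):

hadoop fs -mkdir /user/hduser

hadoop fs -ls /user/hduser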

If you are getting a permission problem, I would suggest
checking the following:

(1) Have you run the command "hadoop namenode -format" as one user
while accessing HDFS as a different user?
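
For example, if the namenode was formatted as root but you access HDFS as
hduser, fixing the ownership of hadoop.tmp.dir (/app/hadoop/tmp per your
core-site.xml) and re-formatting as the right user usually resolves it.
Note that re-formatting wipes any existing HDFS data, and the hduser:hadoop
owner/group below is an assumption based on the tutorial you followed:

sudo chown -R hduser:hadoop /app/hadoop/tmp
# then, as hduser:
hadoop namenode -format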

On Tue, Jul 23, 2013 at 10:10 PM, <bejoy.had...@gmail.com> wrote:

> Hi Ashish
>
> In your hdfs-site.xml, within the <configuration> tag you need to have a
> <property> tag, and inside the <property> tag you can have the <name>,
> <value> and <description> tags.
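>
> For example, your dfs.replication block wrapped correctly would look like
> this (same values you already have; just a sketch of the relevant part,
> not the whole file):
>
> <configuration>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>     <description>Default block replication.</description>
>   </property>
> </configuration>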
>
> Regards
> Bejoy KS
>
> Sent from remote device, Please excuse typos
> ------------------------------
> *From: * Ashish Umrani <ashish.umr...@gmail.com>
> *Date: *Tue, 23 Jul 2013 09:28:00 -0700
> *To: *<user@hadoop.apache.org>
> *ReplyTo: * user@hadoop.apache.org
> *Subject: *Re: New hadoop 1.2 single node installation giving problems
>
> Hey, thanks for the response.  I have changed 4 files during installation:
>
> core-site.xml
> mapred-site.xml
> hdfs-site.xml   and
> hadoop-env.sh
>
>
> I could not find any issues, except that all params in hadoop-env.sh
> are commented out.  Only JAVA_HOME is uncommented.
>
> If you have a quick minute, could you please browse through these files in
> the email and let me know where the issue could be?
>
> Regards
> ashish
>
>
>
> I am listing those files below.
> *core-site.xml*
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/app/hadoop/tmp</value>
>     <description>A base for other temporary directories.</description>
>   </property>
>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://localhost:54310</value>
>     <description>The name of the default file system.  A URI whose
>     scheme and authority determine the FileSystem implementation.  The
>     uri's scheme determines the config property (fs.SCHEME.impl) naming
>     the FileSystem implementation class.  The uri's authority is used to
>     determine the host, port, etc. for a filesystem.</description>
>   </property>
> </configuration>
>
>
>
> *mapred-site.xml*
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <property>
>     <name>mapred.job.tracker</name>
>     <value>localhost:54311</value>
>     <description>The host and port that the MapReduce job tracker runs
>     at.  If "local", then jobs are run in-process as a single map
>     and reduce task.
>     </description>
>   </property>
> </configuration>
>
>
>
> *hdfs-site.xml*
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>
> <!-- Put site-specific property overrides in this file. -->
>
> <configuration>
>   <name>dfs.replication</name>
>   <value>1</value>
>   <description>Default block replication.
>     The actual number of replications can be specified when the file is
> created.
>     The default is used if replication is not specified in create time.
>   </description>
> </configuration>
>
>
>
> *hadoop-env.sh*
> # Set Hadoop-specific environment variables here.
>
> # The only required environment variable is JAVA_HOME.  All others are
> # optional.  When running a distributed configuration it is best to
> # set JAVA_HOME in this file, so that it is correctly defined on
> # remote nodes.
>
> # The java implementation to use.  Required.
> export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25
>
> # Extra Java CLASSPATH elements.  Optional.
> # export HADOOP_CLASSPATH=
>
>
> All other params in hadoop-env.sh are commented out.
>
> On Tue, Jul 23, 2013 at 8:38 AM, Jitendra Yadav <
> jeetuyadav200...@gmail.com> wrote:
>
>> Hi,
>>
>> You might have missed some configuration (XML tags). Please check all
>> the conf files.
>>
>> Thanks
>> On Tue, Jul 23, 2013 at 6:25 PM, Ashish Umrani 
>> <ashish.umr...@gmail.com> wrote:
>>
>>> Hi There,
>>>
>>> First of all, sorry if I am asking a stupid question.  Being new to the
>>> Hadoop environment, I am finding it a bit difficult to figure out why
>>> it's failing.
>>>
>>> I have installed hadoop 1.2, based on the instructions given in the
>>> following link:
>>>
>>> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>>>
>>> All went well; I could run start-all.sh, and the jps command does
>>> show all 5 processes to be present.
>>>
>>> However, when I try to do
>>>
>>> hadoop fs -ls
>>>
>>> I get the following error:
>>>
>>>  hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>> hadoop fs -ls
>>> Warning: $HADOOP_HOME is deprecated.
>>>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> 13/07/23 05:55:06 WARN conf.Configuration: bad conf file: element not
>>> <property>
>>> ls: Cannot access .: No such file or directory.
>>> hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>>
>>>
>>>
>>> Can someone help me figure out what's the issue in my installation?
>>>
>>>
>>> Regards
>>> ashish
>>>
>>
>>
>