Hey guys!!

Any ideas on this one? I'm still stuck with it!

I tried dropping the jar files into lib, but it still doesn't work. The
following is how lib looks after the new files were put in:

[EMAIL PROTECTED] hadoop-0.17.2.1]$ cd bin
[EMAIL PROTECTED] bin]$ ls
hadoop            hadoop-daemon.sh   rcc        start-all.sh       start-dfs.sh     stop-all.sh       stop-dfs.sh
hadoop-config.sh  hadoop-daemons.sh  slaves.sh  start-balancer.sh  start-mapred.sh  stop-balancer.sh  stop-mapred.sh
[EMAIL PROTECTED] bin]$ cd ..
[EMAIL PROTECTED] hadoop-0.17.2.1]$ mv commons-logging-1.1.1/* lib
[EMAIL PROTECTED] hadoop-0.17.2.1]$ cd lib
[EMAIL PROTECTED] lib]$ ls
commons-cli-2.0-SNAPSHOT.jar  commons-logging-1.1.1-javadoc.jar   commons-logging-tests.jar  junit-3.8.1.jar          log4j-1.2.13.jar   site
commons-codec-1.3.jar         commons-logging-1.1.1-sources.jar   jets3t-0.5.0.jar           junit-3.8.1.LICENSE.txt  native             xmlenc-0.52.jar
commons-httpclient-3.0.1.jar  commons-logging-adapters-1.1.1.jar  jetty-5.1.4.jar            kfs-0.1.jar              NOTICE.txt
commons-logging-1.0.4.jar     commons-logging-api-1.0.4.jar       jetty-5.1.4.LICENSE.txt    kfs-0.1.LICENSE.txt      RELEASE-NOTES.txt
commons-logging-1.1.1.jar     commons-logging-api-1.1.1.jar       jetty-ext                  LICENSE.txt              servlet-api.jar
[EMAIL PROTECTED] lib]$ cd ..
[EMAIL PROTECTED] hadoop-0.17.2.1]$ bin/start-all.sh
starting namenode, logging to /home/mithila/hadoop-0.17.2.1/bin/../logs/hadoop-mithila-namenode-node01.out
[EMAIL PROTECTED]'s password:
localhost: starting datanode, logging to /home/mithila/hadoop-0.17.2.1/bin/../logs/hadoop-mithila-datanode-node01.out
[EMAIL PROTECTED]'s password:
localhost: starting secondarynamenode, logging to /home/mithila/hadoop-0.17.2.1/bin/../logs/hadoop-mithila-secondarynamenode-node01.out
starting jobtracker, logging to /home/mithila/hadoop-0.17.2.1/bin/../logs/hadoop-mithila-jobtracker-node01.out
[EMAIL PROTECTED]'s password:
localhost: starting tasktracker, logging to /home/mithila/hadoop-0.17.2.1/bin/../logs/hadoop-mithila-tasktracker-node01.out
localhost: Exception in thread "main" java.lang.ExceptionInInitializerError
localhost: Caused by: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable.
localhost:      at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:874)
localhost:      at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:604)
localhost:      at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:336)
localhost:      at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:704)
localhost:      at org.apache.hadoop.mapred.TaskTracker.<clinit>(TaskTracker.java:95)
localhost: Could not find the main class: org.apache.hadoop.mapred.TaskTracker.  Program will exit.
[EMAIL PROTECTED] hadoop-0.17.2.1]$ ls
bin        c++          commons-logging-1.1.1  contrib  hadoop-0.17.2.1-core.jar      hadoop-0.17.2.1-test.jar  libhdfs      logs        README.txt  webapps
build.xml  CHANGES.txt  conf                   docs     hadoop-0.17.2.1-examples.jar  lib                       LICENSE.txt  NOTICE.txt  src
[EMAIL PROTECTED] hadoop-0.17.2.1]$ conf/hadoop namenode -format
bash: conf/hadoop: No such file or directory
[EMAIL PROTECTED] hadoop-0.17.2.1]$ bin/hadoop namenode -format
Exception in thread "main" java.lang.ExceptionInInitializerError
Caused by: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable.
        at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:874)
        at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:604)
        at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:336)
        at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:704)
        at org.apache.hadoop.dfs.NameNode.<clinit>(NameNode.java:88)
Could not find the main class: org.apache.hadoop.dfs.NameNode.  Program will exit.
[EMAIL PROTECTED] hadoop-0.17.2.1]$
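
In case it helps to narrow things down, here's what I was planning to try next. This is only a rough sketch based on the lib listing above (the /tmp/commons-logging-extra directory is just a scratch location I made up), so please correct me if this is the wrong direction:

# keep only the runtime jar from the commons-logging 1.1.1 download in lib
cd /home/mithila/hadoop-0.17.2.1/lib
mkdir -p /tmp/commons-logging-extra   # scratch dir, adjust as needed
mv commons-logging-1.1.1-javadoc.jar \
   commons-logging-1.1.1-sources.jar \
   commons-logging-tests.jar \
   commons-logging-adapters-1.1.1.jar /tmp/commons-logging-extra/

# maybe also set aside the stock 1.0.4 copies so only one version gets picked up
mv commons-logging-1.0.4.jar commons-logging-api-1.0.4.jar /tmp/commons-logging-extra/

# check that Log4JLogger is really inside the remaining jar and that log4j is still there
jar tf commons-logging-1.1.1.jar | grep Log4JLogger
ls log4j-*.jar

The idea being that only one commons-logging version (plus log4j) would be left on the classpath. Does that sound reasonable?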


Mithila


On Fri, Nov 21, 2008 at 9:22 PM, Alex Loddengaard <[EMAIL PROTECTED]> wrote:

> Download the 1.1.1.tar.gz binaries.  This file will have a bunch of JAR
> files; drop the JAR files into $HADOOP_HOME/lib and see what happens.
> Alex
>
> On Fri, Nov 21, 2008 at 9:19 AM, Mithila Nagendra <[EMAIL PROTECTED]>
> wrote:
>
> > Hey Alex
> > Which file do I download from the apache commons website?
> >
> > Thanks
> > Mithila
> > On Fri, Nov 21, 2008 at 8:15 PM, Mithila Nagendra <[EMAIL PROTECTED]>
> > wrote:
> >
> > > I tried the 0.18.2 as well; it gave me the same exception, so I tried the
> > > lower version. I should check if this works. Thanks!
> > >
> > >
> > > On Fri, Nov 21, 2008 at 5:06 AM, Alex Loddengaard <[EMAIL PROTECTED]>
> > > wrote:
> > >
> > >> Maybe try downloading the Apache Commons Logging jars
> > >> (<http://commons.apache.org/downloads/download_logging.cgi>) and drop
> > >> them into $HADOOP_HOME/lib.
> > >> Just curious, if you're starting a new cluster, why have you chosen to
> > >> use 0.17.* and not 0.18.2?  It would be a good idea to use 0.18.2 if
> > >> possible.
> > >>
> > >> Alex
> > >>
> > >> On Thu, Nov 20, 2008 at 4:36 PM, Mithila Nagendra <[EMAIL PROTECTED]>
> > >> wrote:
> > >>
> > >> > Hey
> > >> > The version is: Linux enpc3740.eas.asu.edu 2.6.9-67.0.20.EL #1 Wed Jun
> > >> > 18 12:23:46 EDT 2008 i686 i686 i386 GNU/Linux. This is what I got when
> > >> > I used the command uname -a (thanks Tom!)
> > >> >
> > >> > Yes, it's bin/start-all. Following is the exception I got when I tried
> > >> > to start the daemons:
> > >> >
> > >> >
> > >> > [EMAIL PROTECTED] mithila]$ ls
> > >> > hadoop-0.17.2.1  hadoop-0.18.2  hadoop-0.18.2.tar.gz
> > >> > [EMAIL PROTECTED] mithila]$ cd hadoop-0.17*
> > >> > [EMAIL PROTECTED] hadoop-0.17.2.1]$ ls
> > >> > bin        c++          conf     docs                      hadoop-0.17.2.1-examples.jar  lib      LICENSE.txt  NOTICE.txt  src
> > >> > build.xml  CHANGES.txt  contrib  hadoop-0.17.2.1-core.jar  hadoop-0.17.2.1-test.jar      libhdfs  logs         README.txt  webapps
> > >> > [EMAIL PROTECTED] hadoop-0.17.2.1]$ bin/start-all
> > >> > bash: bin/start-all: No such file or directory
> > >> > [EMAIL PROTECTED] hadoop-0.17.2.1]$ bin/start-all.sh
> > >> > starting namenode, logging to /home/mithila/hadoop-0.17.2.1/bin/../logs/hadoop-mithila-namenode-node01.out
> > >> > [EMAIL PROTECTED]'s password:
> > >> > localhost: starting datanode, logging to /home/mithila/hadoop-0.17.2.1/bin/../logs/hadoop-mithila-datanode-node01.out
> > >> > [EMAIL PROTECTED]'s password:
> > >> > localhost: starting secondarynamenode, logging to /home/mithila/hadoop-0.17.2.1/bin/../logs/hadoop-mithila-secondarynamenode-node01.out
> > >> > starting jobtracker, logging to /home/mithila/hadoop-0.17.2.1/bin/../logs/hadoop-mithila-jobtracker-node01.out
> > >> > [EMAIL PROTECTED]'s password:
> > >> > localhost: starting tasktracker, logging to /home/mithila/hadoop-0.17.2.1/bin/../logs/hadoop-mithila-tasktracker-node01.out
> > >> > localhost: Exception in thread "main" java.lang.ExceptionInInitializerError
> > >> > localhost: Caused by: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable.
> > >> > localhost:      at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:874)
> > >> > localhost:      at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:604)
> > >> > localhost:      at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:336)
> > >> > localhost:      at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:704)
> > >> > localhost:      at org.apache.hadoop.mapred.TaskTracker.<clinit>(TaskTracker.java:95)
> > >> > localhost: Could not find the main class: org.apache.hadoop.mapred.TaskTracker.  Program will exit.
> > >> >
> > >> > AND when I tried formatting the file system I got the following
> > >> > exception. I followed Michael Noll's steps to install Hadoop. I'm
> > >> > currently working on a single node and, if this works, will move on to
> > >> > multiple nodes in a cluster.
> > >> >
> > >> > [EMAIL PROTECTED] hadoop-0.17.2.1]$ bin/hadoop namenode -format
> > >> > Exception in thread "main" java.lang.ExceptionInInitializerError
> > >> > Caused by: org.apache.commons.logging.LogConfigurationException: User-specified log class 'org.apache.commons.logging.impl.Log4JLogger' cannot be found or is not useable.
> > >> >        at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:874)
> > >> >        at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:604)
> > >> >        at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:336)
> > >> >        at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:704)
> > >> >        at org.apache.hadoop.dfs.NameNode.<clinit>(NameNode.java:88)
> > >> > Could not find the main class: org.apache.hadoop.dfs.NameNode.  Program will exit.
> > >> >
> > >> >
> > >> > I have no idea what's wrong... my hadoop-site.xml file looks as follows:
> > >> >
> > >> > <?xml version="1.0"?>
> > >> > <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> > >> >
> > >> > <!-- Put site-specific property overrides in this file. -->
> > >> >
> > >> > <configuration>
> > >> >
> > >> > <property>
> > >> > <name>hadoop.tmp.dir</name>
> > >> > <value>/tmp/hadoop-${user.name}</value>
> > >> > <description>A base for other temporary directories</description>
> > >> > </property>
> > >> >
> > >> >
> > >> > <property>
> > >> > <name>fs.default.name</name>
> > >> > <value>hdfs://localhost:54310</value>
> > >> > <description>The name of the default file system. A URI whose
> > >> > scheme and authority determine the FileSystem implementation. The
> > >> > URI's scheme determines the config property (fs.scheme.impl) naming
> > >> > the FileSystem implementation class. The URI's authority is used to
> > >> > determine the host, port, etc for a filesystem.</description>
> > >> > </property>
> > >> >
> > >> >
> > >> > <property>
> > >> > <name>mapred.job.tracker</name>
> > >> > <value>localhost:54311</value>
> > >> > <description>The host and port that the MapReduce job tracker runs at.
> > >> > If "local", then jobs are run in-process as a single map and
> > >> > reduce task.</description>
> > >> > </property>
> > >> >
> > >> >
> > >> > <property>
> > >> > <name>dfs.replication</name>
> > >> > <value>1</value>
> > >> > <description>Default block replication.
> > >> > The actual number of replications can be specified when the file is
> > >> > created.
> > >> > The default is used if replication is not specified in create
> > >> > time.</description>
> > >> > </property>
> > >> > "conf/hadoop-site.xml" 42L, 1271C
> > >> >
> > >> >
> > >> > My hadoop-env.sh looks as follows:
> > >> >
> > >> > # Set Hadoop-specific environment variables here.
> > >> >
> > >> > # The only required environment variable is JAVA_HOME.  All others are
> > >> > # optional.  When running a distributed configuration it is best to
> > >> > # set JAVA_HOME in this file, so that it is correctly defined on
> > >> > # remote nodes.
> > >> >
> > >> > # The java implementation to use.  Required.
> > >> >  export JAVA_HOME=/usr/java/jdk1.6.0_10
> > >> >
> > >> > # Extra Java CLASSPATH elements.  Optional.
> > >> > # export HADOOP_CLASSPATH=
> > >> >
> > >> > # The maximum amount of heap to use, in MB. Default is 1000.
> > >> > # export HADOOP_HEAPSIZE=2000
> > >> >
> > >> > # Extra Java runtime options.  Empty by default.
> > >> > # export HADOOP_OPTS=-server
> > >> >
> > >> > # Command specific options appended to HADOOP_OPTS when specified
> > >> > export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote
> > >> > $HADOOP_NAMENODE_OPTS"
> > >> > export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote
> > >> > $HADOOP_SECONDARYNAMENODE_OPTS"
> > >> > export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote
> > >> > $HADOOP_DATANODE_OPTS"
> > >> > export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote
> > >> > $HADOOP_BALANCER_OPTS"
> > >> > export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote
> > >> > $HADOOP_JOBTRACKER_OPTS"
> > >> > # export HADOOP_TASKTRACKER_OPTS=
> > >> > # The following applies to multiple commands (fs, dfs, fsck, distcp etc)
> > >> > # export HADOOP_CLIENT_OPTS
> > >> >
> > >> > # Extra ssh options.  Empty by default.
> > >> > # export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"
> > >> >
> > >> > # Where log files are stored.  $HADOOP_HOME/logs by default.
> > >> > # export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
> > >> >
> > >> > # File naming remote slave hosts.  $HADOOP_HOME/conf/slaves by default.
> > >> > # export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves
> > >> >
> > >> > # host:path where hadoop code should be rsync'd from.  Unset by default.
> > >> > # export HADOOP_MASTER=master:/home/$USER/src/hadoop
> > >> >
> > >> > "conf/hadoop-env.sh" 54L, 2236C
> > >> >
> > >> > Don't know what the exceptions mean. Does anyone have an idea?
> > >> >
> > >> > Thanks
> > >> > Mithila
> > >> >
> > >> >
> > >> > On Thu, Nov 20, 2008 at 6:42 AM, some speed <[EMAIL PROTECTED]>
> > >> wrote:
> > >> >
> > >> > > Hi,
> > >> > >
> > >> > > I am working on the same for my master's project and I know how
> > >> > > frustrating it can be to get Hadoop installed.
> > >> > > If time is not a factor, I suggest you first try implementing it in a
> > >> > > pseudo-distributed environment. Once you understand how things work by
> > >> > > implementing a simple MapReduce program, you can easily move on to a
> > >> > > cluster.
> > >> > >
> > >> > > From what little I know, let me tell you a few things:
> > >> > >
> > >> > > I tried using the university network to install Hadoop; it was a real
> > >> > > pain. Maybe it was because I didn't have admin privileges (to install
> > >> > > HDFS and its files). So make sure you have admin rights, or you'll keep
> > >> > > getting an error about port 22 (for ssh) not being open or the daemons
> > >> > > not being started.
> > >> > > And by the way, is it conf/start-all.sh? I think it's bin/start-all.sh
> > >> > > or something of that sort.
> > >> > >
> > >> > > hadoop-site.xml -- I had the links bookmarked somewhere, can't find
> > >> > > them now, but I think you are supposed to have a few more details in
> > >> > > there for a cluster installation. I'm sure we can find those online
> > >> > > quite easily.
> > >> > >
> > >> > > Also, I suppose you are using Java? If you are good with Eclipse, then
> > >> > > you can implement MapReduce/Hadoop through that on a single node (just
> > >> > > to get the hang of it).
> > >> > >
> > >> > > All the best!
> > >> > >
> > >> > > On Wed, Nov 19, 2008 at 6:38 PM, Tom Wheeler <[EMAIL PROTECTED]>
> > >> wrote:
> > >> > >
> > >> > >> On Wed, Nov 19, 2008 at 5:31 PM, Mithila Nagendra <[EMAIL PROTECTED]>
> > >> > >> wrote:
> > >> > >> > Oh, is that so? I'm not sure which UNIX it is since I'm working with
> > >> > >> > a cluster that is remotely accessed.
> > >> > >>
> > >> > >> If you can get a shell on the machine, try typing "uname -a" to see
> > >> > >> what type of UNIX it is.
> > >> > >>
> > >> > >> Alternatively, the os.name, os.version and os.arch Java system
> > >> > >> properties could also help you to identify the operating system.
> > >> > >>
> > >> > >> --
> > >> > >> Tom Wheeler
> > >> > >> http://www.tomwheeler.com/
> > >> > >>
