Try starting it as a regular, non-root user. The "Unrecognized option:
-jvm" failure is a known issue with the 0.20.205 scripts when the
datanode is launched as root; it may be fixed in more recent releases,
but you do not want to be running Hadoop as root anyway.
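For example, something along these lines should avoid it (a sketch
assuming a dedicated 'hadoop' account that owns the install and the
data/log directories on each node; adjust paths to your layout):

  useradd hadoop
  chown -R hadoop:hadoop /usr/local/hadoop-0.20.205.0 /mnt/hadoop
  su - hadoop -c '/usr/local/hadoop-0.20.205.0/bin/start-dfs.sh'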

On Tue, Jan 24, 2012 at 10:22 PM, Fei Dong <dongfei...@gmail.com> wrote:
> When I install hadoop-0.20.205.0, the namenode cannot start.
>
> [root@ip-10-114-45-186 logs]# /usr/local/hadoop-0.20.205.0/bin/start-dfs.sh
> starting namenode, logging to
> /mnt/hadoop/logs/hadoop-root-namenode-ip-10-114-45-186.out
> ip-10-12-55-242.ec2.internal: starting datanode, logging to
> /mnt/hadoop/logs/hadoop-root-datanode-ip-10-12-55-242.out
> ip-10-12-55-242.ec2.internal: Unrecognized option: -jvm
> ip-10-12-55-242.ec2.internal: Could not create the Java virtual machine.
>
> I could not find "-jvm" anywhere in the config files. Did I misconfigure something?
>
> On Tue, Jan 24, 2012 at 11:11 AM, Fei Dong <dongfei...@gmail.com> wrote:
>> Thanks Stack,
>>
>> On Tue, Jan 24, 2012 at 1:07 AM, Stack <st...@duboce.net> wrote:
>>> On Mon, Jan 23, 2012 at 5:32 PM, Fei Dong <dongfei...@gmail.com> wrote:
>>>> Hello guys,
>>>>
>>>> I set up Hadoop and HBase on EC2. My settings are as follows:
>>>> Official Apache release:
>>>> Hadoop 0.20.203.0
>>>
>>> HBase won't work on this version of Hadoop.  See
>>> http://hbase.apache.org/book.html#hadoop
>>>
>> It says only Hadoop 0.20.205.x will work, right?
>>
>>>
>>>> export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HBASE_HOME/lib/zookeeper.jar"
>>>> export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HBASE_HOME/hbase.jar"
>>>>
>>>
>>> The jars are not normally named as you have them above.  Usually the
>>> jar name includes a version.
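>>> Pointing at the versioned jars directly avoids any renaming or
>>> symlinking, e.g. (the version numbers below are only illustrative;
>>> use whatever your HBase ships with):
>>>
>>>   export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HBASE_HOME/lib/zookeeper-3.3.2.jar"
>>>   export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HBASE_HOME/hbase-0.90.4.jar"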
>>>
>> I soft-linked zookeeper.jar and hbase.jar to the versioned ones.
>>
>>>
>>>> org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to
>>>> connect to ZooKeeper but the connection closes immediately. This could
>>>> be a sign that the server has too many connections (30 is the
>>>> default). Consider inspecting your ZK server logs for that error and
>>>> then make sure you are reusing HBaseConfiguration as often as you can.
>>>> See HTable's javadoc for more information.
>>>>        at 
>>>> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:155)
>>>>        at 
>>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1002)
>>>>        at 
>>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:304)
>>>>        at 
>>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:295)
>>>>        at 
>>>> org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:157)
>>>> """
>>>>
>>>
>>> Search this mailing list's archive for reports similar to the above.
>>> As a workaround, raise your maximum count of concurrent ZooKeeper
>>> connections.
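>>> To check what limit the running server actually enforces and how
>>> many connections are currently open, ZooKeeper's four-letter words
>>> are handy (a sketch; run against your ZK host and client port, 2181
>>> by default):
>>>
>>>   echo stat | nc localhost 2181
>>>   echo cons | nc localhost 2181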
>>>
>> I had not run any application before, so there should not be a
>> concurrency problem. I then set this in hbase-site.xml, but it still
>> reports the same error:
>>   <property>
>>     <name>hbase.zookeeper.property.maxClientCnxns</name>
>>     <value>1000</value>
>>   </property>
>>
>>>
>>>> 2) When running another MapReduce job:
>>>>
>>>> /usr/local/hadoop-0.20.203.0/bin/hadoop jar
>>>> ./bin/../dist/xxxxxx.jar pMapReduce.SmartRunner -numReducers
>>>> 80 -inDir /root/test1/input -outDir /root/test1/output -landmarkTable
>>>> Landmarks -resultsTable test_one -numIter 10 -maxLatency 75
>>>> -filterMinDist 10 -hostAnswerWeight 5 -minNumLandmarks 1 -minNumMeas 1
>>>> -alwaysUseWeightedIxn -writeFullDetails -weightMonte -allTarg
>>>> -allLookup -clean -cleanResultsTable
>>>>
>>>> The JobTracker shows this error:
>>>> ""
>>>> 12/01/23 00:51:31 INFO mapred.JobClient: Running job: job_201201212243_0009
>>>> 12/01/23 00:51:32 INFO mapred.JobClient:  map 0% reduce 0%
>>>> 12/01/23 00:51:40 INFO mapred.JobClient: Task Id :
>>>> attempt_201201212243_0009_m_000174_0, Status : FAILED
>>>> java.lang.Throwable: Child Error
>>>>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>>>> Caused by: java.io.IOException: Task process exit with nonzero status of 1.
>>>>        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>>>> """
>>>>
>>>> TaskTracker log:
>>>> """
>>>> Could not find the main class: .  Program will exit.
>>>> Exception in thread "main" java.lang.NoClassDefFoundError:
>>>> Caused by: java.lang.ClassNotFoundException:
>>>>        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>>>>        at java.security.AccessController.doPrivileged(Native Method)
>>>>        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>>>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
>>>>        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>>>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
>>>> Could not find the main class: .  Program will exit.
>>>> """
>>>
>>> That's a pretty basic failure; it couldn't find a basic Java class
>>> on the classpath.  Can you dig into this more?  Have you seen this:
>>> http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#classpath
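>>> The usual approach from that page is to launch the job with HBase's
>>> own classpath on the front, e.g. (a sketch reusing your earlier
>>> command; fill in the rest of your arguments):
>>>
>>>   HADOOP_CLASSPATH=$($HBASE_HOME/bin/hbase classpath) \
>>>     /usr/local/hadoop-0.20.203.0/bin/hadoop jar ./dist/xxxxxx.jar \
>>>     pMapReduce.SmartRunner <your args>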
>>>
>> I will first install Hadoop 0.20.205 and try again.
>>
>>> St.Ack
>>>
>>>>
>>>> The real entry point is main() in SmartRunner.class:
>>>>   jar tf ./bin/../dist/xxxxxx.jar | grep SmartRunner
>>>>   pMapReduce/SmartRunner.class
>>>>
>>>> Can anyone help me? Thanks a lot.
>>>> --
>>>> Best Regards,
>>>> --
>>>> Fei Dong
>>
>>
>>
>> --
>> Best Regards,
>> --
>> Fei Dong
>
>
>
> --
> Best Regards,
> --
> Fei Dong



-- 
Harsh J
Customer Ops. Engineer, Cloudera
